Dec  1 04:08:53 np0005540825 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Dec  1 04:08:53 np0005540825 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  1 04:08:53 np0005540825 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  1 04:08:53 np0005540825 kernel: BIOS-provided physical RAM map:
Dec  1 04:08:53 np0005540825 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  1 04:08:53 np0005540825 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  1 04:08:53 np0005540825 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  1 04:08:53 np0005540825 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  1 04:08:53 np0005540825 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  1 04:08:53 np0005540825 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  1 04:08:53 np0005540825 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  1 04:08:53 np0005540825 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  1 04:08:53 np0005540825 kernel: NX (Execute Disable) protection: active
Dec  1 04:08:53 np0005540825 kernel: APIC: Static calls initialized
Dec  1 04:08:53 np0005540825 kernel: SMBIOS 2.8 present.
Dec  1 04:08:53 np0005540825 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  1 04:08:53 np0005540825 kernel: Hypervisor detected: KVM
Dec  1 04:08:53 np0005540825 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  1 04:08:53 np0005540825 kernel: kvm-clock: using sched offset of 3337739292 cycles
Dec  1 04:08:53 np0005540825 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  1 04:08:53 np0005540825 kernel: tsc: Detected 2800.000 MHz processor
Dec  1 04:08:53 np0005540825 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  1 04:08:53 np0005540825 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  1 04:08:53 np0005540825 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  1 04:08:53 np0005540825 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  1 04:08:53 np0005540825 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  1 04:08:53 np0005540825 kernel: Using GB pages for direct mapping
Dec  1 04:08:53 np0005540825 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Dec  1 04:08:53 np0005540825 kernel: ACPI: Early table checksum verification disabled
Dec  1 04:08:53 np0005540825 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  1 04:08:53 np0005540825 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 04:08:53 np0005540825 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 04:08:53 np0005540825 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 04:08:53 np0005540825 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  1 04:08:53 np0005540825 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 04:08:53 np0005540825 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 04:08:53 np0005540825 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  1 04:08:53 np0005540825 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  1 04:08:53 np0005540825 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  1 04:08:53 np0005540825 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  1 04:08:53 np0005540825 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  1 04:08:53 np0005540825 kernel: No NUMA configuration found
Dec  1 04:08:53 np0005540825 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  1 04:08:53 np0005540825 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  1 04:08:53 np0005540825 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  1 04:08:53 np0005540825 kernel: Zone ranges:
Dec  1 04:08:53 np0005540825 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  1 04:08:53 np0005540825 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  1 04:08:53 np0005540825 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  1 04:08:53 np0005540825 kernel:  Device   empty
Dec  1 04:08:53 np0005540825 kernel: Movable zone start for each node
Dec  1 04:08:53 np0005540825 kernel: Early memory node ranges
Dec  1 04:08:53 np0005540825 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  1 04:08:53 np0005540825 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  1 04:08:53 np0005540825 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  1 04:08:53 np0005540825 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  1 04:08:53 np0005540825 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  1 04:08:53 np0005540825 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  1 04:08:53 np0005540825 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  1 04:08:53 np0005540825 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  1 04:08:53 np0005540825 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  1 04:08:53 np0005540825 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  1 04:08:53 np0005540825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  1 04:08:53 np0005540825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  1 04:08:53 np0005540825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  1 04:08:53 np0005540825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  1 04:08:53 np0005540825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  1 04:08:53 np0005540825 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  1 04:08:53 np0005540825 kernel: TSC deadline timer available
Dec  1 04:08:53 np0005540825 kernel: CPU topo: Max. logical packages:   8
Dec  1 04:08:53 np0005540825 kernel: CPU topo: Max. logical dies:       8
Dec  1 04:08:53 np0005540825 kernel: CPU topo: Max. dies per package:   1
Dec  1 04:08:53 np0005540825 kernel: CPU topo: Max. threads per core:   1
Dec  1 04:08:53 np0005540825 kernel: CPU topo: Num. cores per package:     1
Dec  1 04:08:53 np0005540825 kernel: CPU topo: Num. threads per package:   1
Dec  1 04:08:53 np0005540825 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  1 04:08:53 np0005540825 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  1 04:08:53 np0005540825 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  1 04:08:53 np0005540825 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  1 04:08:53 np0005540825 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  1 04:08:53 np0005540825 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  1 04:08:53 np0005540825 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  1 04:08:53 np0005540825 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  1 04:08:53 np0005540825 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  1 04:08:53 np0005540825 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  1 04:08:53 np0005540825 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  1 04:08:53 np0005540825 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  1 04:08:53 np0005540825 kernel: Booting paravirtualized kernel on KVM
Dec  1 04:08:53 np0005540825 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  1 04:08:53 np0005540825 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  1 04:08:53 np0005540825 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  1 04:08:53 np0005540825 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  1 04:08:53 np0005540825 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  1 04:08:53 np0005540825 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Dec  1 04:08:53 np0005540825 kernel: random: crng init done
Dec  1 04:08:53 np0005540825 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: Fallback order for Node 0: 0 
Dec  1 04:08:53 np0005540825 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  1 04:08:53 np0005540825 kernel: Policy zone: Normal
Dec  1 04:08:53 np0005540825 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  1 04:08:53 np0005540825 kernel: software IO TLB: area num 8.
Dec  1 04:08:53 np0005540825 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  1 04:08:53 np0005540825 kernel: ftrace: allocating 49313 entries in 193 pages
Dec  1 04:08:53 np0005540825 kernel: ftrace: allocated 193 pages with 3 groups
Dec  1 04:08:53 np0005540825 kernel: Dynamic Preempt: voluntary
Dec  1 04:08:53 np0005540825 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  1 04:08:53 np0005540825 kernel: rcu: 	RCU event tracing is enabled.
Dec  1 04:08:53 np0005540825 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  1 04:08:53 np0005540825 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  1 04:08:53 np0005540825 kernel: 	Rude variant of Tasks RCU enabled.
Dec  1 04:08:53 np0005540825 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  1 04:08:53 np0005540825 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  1 04:08:53 np0005540825 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  1 04:08:53 np0005540825 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 04:08:53 np0005540825 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 04:08:53 np0005540825 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 04:08:53 np0005540825 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  1 04:08:53 np0005540825 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  1 04:08:53 np0005540825 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  1 04:08:53 np0005540825 kernel: Console: colour VGA+ 80x25
Dec  1 04:08:53 np0005540825 kernel: printk: console [ttyS0] enabled
Dec  1 04:08:53 np0005540825 kernel: ACPI: Core revision 20230331
Dec  1 04:08:53 np0005540825 kernel: APIC: Switch to symmetric I/O mode setup
Dec  1 04:08:53 np0005540825 kernel: x2apic enabled
Dec  1 04:08:53 np0005540825 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  1 04:08:53 np0005540825 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  1 04:08:53 np0005540825 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec  1 04:08:53 np0005540825 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  1 04:08:53 np0005540825 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  1 04:08:53 np0005540825 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  1 04:08:53 np0005540825 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  1 04:08:53 np0005540825 kernel: Spectre V2 : Mitigation: Retpolines
Dec  1 04:08:53 np0005540825 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  1 04:08:53 np0005540825 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  1 04:08:53 np0005540825 kernel: RETBleed: Mitigation: untrained return thunk
Dec  1 04:08:53 np0005540825 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  1 04:08:53 np0005540825 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  1 04:08:53 np0005540825 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  1 04:08:53 np0005540825 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  1 04:08:53 np0005540825 kernel: x86/bugs: return thunk changed
Dec  1 04:08:53 np0005540825 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  1 04:08:53 np0005540825 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  1 04:08:53 np0005540825 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  1 04:08:53 np0005540825 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  1 04:08:53 np0005540825 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  1 04:08:53 np0005540825 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  1 04:08:53 np0005540825 kernel: Freeing SMP alternatives memory: 40K
Dec  1 04:08:53 np0005540825 kernel: pid_max: default: 32768 minimum: 301
Dec  1 04:08:53 np0005540825 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  1 04:08:53 np0005540825 kernel: landlock: Up and running.
Dec  1 04:08:53 np0005540825 kernel: Yama: becoming mindful.
Dec  1 04:08:53 np0005540825 kernel: SELinux:  Initializing.
Dec  1 04:08:53 np0005540825 kernel: LSM support for eBPF active
Dec  1 04:08:53 np0005540825 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  1 04:08:53 np0005540825 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  1 04:08:53 np0005540825 kernel: ... version:                0
Dec  1 04:08:53 np0005540825 kernel: ... bit width:              48
Dec  1 04:08:53 np0005540825 kernel: ... generic registers:      6
Dec  1 04:08:53 np0005540825 kernel: ... value mask:             0000ffffffffffff
Dec  1 04:08:53 np0005540825 kernel: ... max period:             00007fffffffffff
Dec  1 04:08:53 np0005540825 kernel: ... fixed-purpose events:   0
Dec  1 04:08:53 np0005540825 kernel: ... event mask:             000000000000003f
Dec  1 04:08:53 np0005540825 kernel: signal: max sigframe size: 1776
Dec  1 04:08:53 np0005540825 kernel: rcu: Hierarchical SRCU implementation.
Dec  1 04:08:53 np0005540825 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  1 04:08:53 np0005540825 kernel: smp: Bringing up secondary CPUs ...
Dec  1 04:08:53 np0005540825 kernel: smpboot: x86: Booting SMP configuration:
Dec  1 04:08:53 np0005540825 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  1 04:08:53 np0005540825 kernel: smp: Brought up 1 node, 8 CPUs
Dec  1 04:08:53 np0005540825 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec  1 04:08:53 np0005540825 kernel: node 0 deferred pages initialised in 6ms
Dec  1 04:08:53 np0005540825 kernel: Memory: 7765892K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616268K reserved, 0K cma-reserved)
Dec  1 04:08:53 np0005540825 kernel: devtmpfs: initialized
Dec  1 04:08:53 np0005540825 kernel: x86/mm: Memory block size: 128MB
Dec  1 04:08:53 np0005540825 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  1 04:08:53 np0005540825 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: pinctrl core: initialized pinctrl subsystem
Dec  1 04:08:53 np0005540825 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  1 04:08:53 np0005540825 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  1 04:08:53 np0005540825 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  1 04:08:53 np0005540825 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  1 04:08:53 np0005540825 kernel: audit: initializing netlink subsys (disabled)
Dec  1 04:08:53 np0005540825 kernel: audit: type=2000 audit(1764580131.521:1): state=initialized audit_enabled=0 res=1
Dec  1 04:08:53 np0005540825 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  1 04:08:53 np0005540825 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  1 04:08:53 np0005540825 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  1 04:08:53 np0005540825 kernel: cpuidle: using governor menu
Dec  1 04:08:53 np0005540825 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  1 04:08:53 np0005540825 kernel: PCI: Using configuration type 1 for base access
Dec  1 04:08:53 np0005540825 kernel: PCI: Using configuration type 1 for extended access
Dec  1 04:08:53 np0005540825 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  1 04:08:53 np0005540825 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  1 04:08:53 np0005540825 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  1 04:08:53 np0005540825 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  1 04:08:53 np0005540825 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  1 04:08:53 np0005540825 kernel: Demotion targets for Node 0: null
Dec  1 04:08:53 np0005540825 kernel: cryptd: max_cpu_qlen set to 1000
Dec  1 04:08:53 np0005540825 kernel: ACPI: Added _OSI(Module Device)
Dec  1 04:08:53 np0005540825 kernel: ACPI: Added _OSI(Processor Device)
Dec  1 04:08:53 np0005540825 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  1 04:08:53 np0005540825 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  1 04:08:53 np0005540825 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  1 04:08:53 np0005540825 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  1 04:08:53 np0005540825 kernel: ACPI: Interpreter enabled
Dec  1 04:08:53 np0005540825 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  1 04:08:53 np0005540825 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  1 04:08:53 np0005540825 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  1 04:08:53 np0005540825 kernel: PCI: Using E820 reservations for host bridge windows
Dec  1 04:08:53 np0005540825 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  1 04:08:53 np0005540825 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  1 04:08:53 np0005540825 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [3] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [4] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [5] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [6] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [7] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [8] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [9] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [10] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [11] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [12] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [13] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [14] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [15] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [16] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [17] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [18] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [19] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [20] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [21] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [22] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [23] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [24] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [25] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [26] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [27] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [28] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [29] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [30] registered
Dec  1 04:08:53 np0005540825 kernel: acpiphp: Slot [31] registered
Dec  1 04:08:53 np0005540825 kernel: PCI host bridge to bus 0000:00
Dec  1 04:08:53 np0005540825 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  1 04:08:53 np0005540825 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  1 04:08:53 np0005540825 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  1 04:08:53 np0005540825 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  1 04:08:53 np0005540825 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  1 04:08:53 np0005540825 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  1 04:08:53 np0005540825 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  1 04:08:53 np0005540825 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  1 04:08:53 np0005540825 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  1 04:08:53 np0005540825 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  1 04:08:53 np0005540825 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  1 04:08:53 np0005540825 kernel: iommu: Default domain type: Translated
Dec  1 04:08:53 np0005540825 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  1 04:08:53 np0005540825 kernel: SCSI subsystem initialized
Dec  1 04:08:53 np0005540825 kernel: ACPI: bus type USB registered
Dec  1 04:08:53 np0005540825 kernel: usbcore: registered new interface driver usbfs
Dec  1 04:08:53 np0005540825 kernel: usbcore: registered new interface driver hub
Dec  1 04:08:53 np0005540825 kernel: usbcore: registered new device driver usb
Dec  1 04:08:53 np0005540825 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  1 04:08:53 np0005540825 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  1 04:08:53 np0005540825 kernel: PTP clock support registered
Dec  1 04:08:53 np0005540825 kernel: EDAC MC: Ver: 3.0.0
Dec  1 04:08:53 np0005540825 kernel: NetLabel: Initializing
Dec  1 04:08:53 np0005540825 kernel: NetLabel:  domain hash size = 128
Dec  1 04:08:53 np0005540825 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  1 04:08:53 np0005540825 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  1 04:08:53 np0005540825 kernel: PCI: Using ACPI for IRQ routing
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  1 04:08:53 np0005540825 kernel: vgaarb: loaded
Dec  1 04:08:53 np0005540825 kernel: clocksource: Switched to clocksource kvm-clock
Dec  1 04:08:53 np0005540825 kernel: VFS: Disk quotas dquot_6.6.0
Dec  1 04:08:53 np0005540825 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  1 04:08:53 np0005540825 kernel: pnp: PnP ACPI init
Dec  1 04:08:53 np0005540825 kernel: pnp: PnP ACPI: found 5 devices
Dec  1 04:08:53 np0005540825 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  1 04:08:53 np0005540825 kernel: NET: Registered PF_INET protocol family
Dec  1 04:08:53 np0005540825 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  1 04:08:53 np0005540825 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  1 04:08:53 np0005540825 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  1 04:08:53 np0005540825 kernel: NET: Registered PF_XDP protocol family
Dec  1 04:08:53 np0005540825 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  1 04:08:53 np0005540825 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  1 04:08:53 np0005540825 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  1 04:08:53 np0005540825 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  1 04:08:53 np0005540825 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  1 04:08:53 np0005540825 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  1 04:08:53 np0005540825 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 83003 usecs
Dec  1 04:08:53 np0005540825 kernel: PCI: CLS 0 bytes, default 64
Dec  1 04:08:53 np0005540825 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  1 04:08:53 np0005540825 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  1 04:08:53 np0005540825 kernel: Trying to unpack rootfs image as initramfs...
Dec  1 04:08:53 np0005540825 kernel: ACPI: bus type thunderbolt registered
Dec  1 04:08:53 np0005540825 kernel: Initialise system trusted keyrings
Dec  1 04:08:53 np0005540825 kernel: Key type blacklist registered
Dec  1 04:08:53 np0005540825 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  1 04:08:53 np0005540825 kernel: zbud: loaded
Dec  1 04:08:53 np0005540825 kernel: integrity: Platform Keyring initialized
Dec  1 04:08:53 np0005540825 kernel: integrity: Machine keyring initialized
Dec  1 04:08:53 np0005540825 kernel: Freeing initrd memory: 85868K
Dec  1 04:08:53 np0005540825 kernel: NET: Registered PF_ALG protocol family
Dec  1 04:08:53 np0005540825 kernel: xor: automatically using best checksumming function   avx       
Dec  1 04:08:53 np0005540825 kernel: Key type asymmetric registered
Dec  1 04:08:53 np0005540825 kernel: Asymmetric key parser 'x509' registered
Dec  1 04:08:53 np0005540825 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  1 04:08:53 np0005540825 kernel: io scheduler mq-deadline registered
Dec  1 04:08:53 np0005540825 kernel: io scheduler kyber registered
Dec  1 04:08:53 np0005540825 kernel: io scheduler bfq registered
Dec  1 04:08:53 np0005540825 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  1 04:08:53 np0005540825 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  1 04:08:53 np0005540825 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  1 04:08:53 np0005540825 kernel: ACPI: button: Power Button [PWRF]
Dec  1 04:08:53 np0005540825 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  1 04:08:53 np0005540825 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  1 04:08:53 np0005540825 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  1 04:08:53 np0005540825 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  1 04:08:53 np0005540825 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  1 04:08:53 np0005540825 kernel: Non-volatile memory driver v1.3
Dec  1 04:08:53 np0005540825 kernel: rdac: device handler registered
Dec  1 04:08:53 np0005540825 kernel: hp_sw: device handler registered
Dec  1 04:08:53 np0005540825 kernel: emc: device handler registered
Dec  1 04:08:53 np0005540825 kernel: alua: device handler registered
Dec  1 04:08:53 np0005540825 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  1 04:08:53 np0005540825 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  1 04:08:53 np0005540825 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  1 04:08:53 np0005540825 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  1 04:08:53 np0005540825 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  1 04:08:53 np0005540825 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  1 04:08:53 np0005540825 kernel: usb usb1: Product: UHCI Host Controller
Dec  1 04:08:53 np0005540825 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Dec  1 04:08:53 np0005540825 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  1 04:08:53 np0005540825 kernel: hub 1-0:1.0: USB hub found
Dec  1 04:08:53 np0005540825 kernel: hub 1-0:1.0: 2 ports detected
Dec  1 04:08:53 np0005540825 kernel: usbcore: registered new interface driver usbserial_generic
Dec  1 04:08:53 np0005540825 kernel: usbserial: USB Serial support registered for generic
Dec  1 04:08:53 np0005540825 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  1 04:08:53 np0005540825 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  1 04:08:53 np0005540825 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  1 04:08:53 np0005540825 kernel: mousedev: PS/2 mouse device common for all mice
Dec  1 04:08:53 np0005540825 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  1 04:08:53 np0005540825 kernel: rtc_cmos 00:04: registered as rtc0
Dec  1 04:08:53 np0005540825 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  1 04:08:53 np0005540825 kernel: rtc_cmos 00:04: setting system clock to 2025-12-01T09:08:52 UTC (1764580132)
Dec  1 04:08:53 np0005540825 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  1 04:08:53 np0005540825 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  1 04:08:53 np0005540825 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  1 04:08:53 np0005540825 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  1 04:08:53 np0005540825 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  1 04:08:53 np0005540825 kernel: usbcore: registered new interface driver usbhid
Dec  1 04:08:53 np0005540825 kernel: usbhid: USB HID core driver
Dec  1 04:08:53 np0005540825 kernel: drop_monitor: Initializing network drop monitor service
Dec  1 04:08:53 np0005540825 kernel: Initializing XFRM netlink socket
Dec  1 04:08:53 np0005540825 kernel: NET: Registered PF_INET6 protocol family
Dec  1 04:08:53 np0005540825 kernel: Segment Routing with IPv6
Dec  1 04:08:53 np0005540825 kernel: NET: Registered PF_PACKET protocol family
Dec  1 04:08:53 np0005540825 kernel: mpls_gso: MPLS GSO support
Dec  1 04:08:53 np0005540825 kernel: IPI shorthand broadcast: enabled
Dec  1 04:08:53 np0005540825 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  1 04:08:53 np0005540825 kernel: AES CTR mode by8 optimization enabled
Dec  1 04:08:53 np0005540825 kernel: sched_clock: Marking stable (1233001440, 154260590)->(1507530199, -120268169)
Dec  1 04:08:53 np0005540825 kernel: registered taskstats version 1
Dec  1 04:08:53 np0005540825 kernel: Loading compiled-in X.509 certificates
Dec  1 04:08:53 np0005540825 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Dec  1 04:08:53 np0005540825 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  1 04:08:53 np0005540825 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  1 04:08:53 np0005540825 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  1 04:08:53 np0005540825 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  1 04:08:53 np0005540825 kernel: Demotion targets for Node 0: null
Dec  1 04:08:53 np0005540825 kernel: page_owner is disabled
Dec  1 04:08:53 np0005540825 kernel: Key type .fscrypt registered
Dec  1 04:08:53 np0005540825 kernel: Key type fscrypt-provisioning registered
Dec  1 04:08:53 np0005540825 kernel: Key type big_key registered
Dec  1 04:08:53 np0005540825 kernel: Key type encrypted registered
Dec  1 04:08:53 np0005540825 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  1 04:08:53 np0005540825 kernel: Loading compiled-in module X.509 certificates
Dec  1 04:08:53 np0005540825 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Dec  1 04:08:53 np0005540825 kernel: ima: Allocated hash algorithm: sha256
Dec  1 04:08:53 np0005540825 kernel: ima: No architecture policies found
Dec  1 04:08:53 np0005540825 kernel: evm: Initialising EVM extended attributes:
Dec  1 04:08:53 np0005540825 kernel: evm: security.selinux
Dec  1 04:08:53 np0005540825 kernel: evm: security.SMACK64 (disabled)
Dec  1 04:08:53 np0005540825 kernel: evm: security.SMACK64EXEC (disabled)
Dec  1 04:08:53 np0005540825 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  1 04:08:53 np0005540825 kernel: evm: security.SMACK64MMAP (disabled)
Dec  1 04:08:53 np0005540825 kernel: evm: security.apparmor (disabled)
Dec  1 04:08:53 np0005540825 kernel: evm: security.ima
Dec  1 04:08:53 np0005540825 kernel: evm: security.capability
Dec  1 04:08:53 np0005540825 kernel: evm: HMAC attrs: 0x1
Dec  1 04:08:53 np0005540825 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  1 04:08:53 np0005540825 kernel: Running certificate verification RSA selftest
Dec  1 04:08:53 np0005540825 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  1 04:08:53 np0005540825 kernel: Running certificate verification ECDSA selftest
Dec  1 04:08:53 np0005540825 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  1 04:08:53 np0005540825 kernel: clk: Disabling unused clocks
Dec  1 04:08:53 np0005540825 kernel: Freeing unused decrypted memory: 2028K
Dec  1 04:08:53 np0005540825 kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec  1 04:08:53 np0005540825 kernel: Write protecting the kernel read-only data: 30720k
Dec  1 04:08:53 np0005540825 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Dec  1 04:08:53 np0005540825 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  1 04:08:53 np0005540825 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  1 04:08:53 np0005540825 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  1 04:08:53 np0005540825 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  1 04:08:53 np0005540825 kernel: usb 1-1: Manufacturer: QEMU
Dec  1 04:08:53 np0005540825 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  1 04:08:53 np0005540825 kernel: Run /init as init process
Dec  1 04:08:53 np0005540825 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  1 04:08:53 np0005540825 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  1 04:08:53 np0005540825 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  1 04:08:53 np0005540825 systemd: Detected virtualization kvm.
Dec  1 04:08:53 np0005540825 systemd: Detected architecture x86-64.
Dec  1 04:08:53 np0005540825 systemd: Running in initrd.
Dec  1 04:08:53 np0005540825 systemd: No hostname configured, using default hostname.
Dec  1 04:08:53 np0005540825 systemd: Hostname set to <localhost>.
Dec  1 04:08:53 np0005540825 systemd: Initializing machine ID from VM UUID.
Dec  1 04:08:53 np0005540825 systemd: Queued start job for default target Initrd Default Target.
Dec  1 04:08:53 np0005540825 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  1 04:08:53 np0005540825 systemd: Reached target Local Encrypted Volumes.
Dec  1 04:08:53 np0005540825 systemd: Reached target Initrd /usr File System.
Dec  1 04:08:53 np0005540825 systemd: Reached target Local File Systems.
Dec  1 04:08:53 np0005540825 systemd: Reached target Path Units.
Dec  1 04:08:53 np0005540825 systemd: Reached target Slice Units.
Dec  1 04:08:53 np0005540825 systemd: Reached target Swaps.
Dec  1 04:08:53 np0005540825 systemd: Reached target Timer Units.
Dec  1 04:08:53 np0005540825 systemd: Listening on D-Bus System Message Bus Socket.
Dec  1 04:08:53 np0005540825 systemd: Listening on Journal Socket (/dev/log).
Dec  1 04:08:53 np0005540825 systemd: Listening on Journal Socket.
Dec  1 04:08:53 np0005540825 systemd: Listening on udev Control Socket.
Dec  1 04:08:53 np0005540825 systemd: Listening on udev Kernel Socket.
Dec  1 04:08:53 np0005540825 systemd: Reached target Socket Units.
Dec  1 04:08:53 np0005540825 systemd: Starting Create List of Static Device Nodes...
Dec  1 04:08:53 np0005540825 systemd: Starting Journal Service...
Dec  1 04:08:53 np0005540825 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  1 04:08:53 np0005540825 systemd: Starting Apply Kernel Variables...
Dec  1 04:08:53 np0005540825 systemd: Starting Create System Users...
Dec  1 04:08:53 np0005540825 systemd: Starting Setup Virtual Console...
Dec  1 04:08:53 np0005540825 systemd: Finished Create List of Static Device Nodes.
Dec  1 04:08:53 np0005540825 systemd: Finished Apply Kernel Variables.
Dec  1 04:08:53 np0005540825 systemd-journald[305]: Journal started
Dec  1 04:08:53 np0005540825 systemd-journald[305]: Runtime Journal (/run/log/journal/4cd03307de0c4b81bfb4f23408ecf241) is 8.0M, max 153.6M, 145.6M free.
Dec  1 04:08:53 np0005540825 systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec  1 04:08:53 np0005540825 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec  1 04:08:53 np0005540825 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  1 04:08:53 np0005540825 systemd: Started Journal Service.
Dec  1 04:08:53 np0005540825 systemd[1]: Finished Create System Users.
Dec  1 04:08:53 np0005540825 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  1 04:08:53 np0005540825 systemd[1]: Starting Create Volatile Files and Directories...
Dec  1 04:08:53 np0005540825 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  1 04:08:53 np0005540825 systemd[1]: Finished Setup Virtual Console.
Dec  1 04:08:53 np0005540825 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  1 04:08:53 np0005540825 systemd[1]: Starting dracut cmdline hook...
Dec  1 04:08:53 np0005540825 systemd[1]: Finished Create Volatile Files and Directories.
Dec  1 04:08:53 np0005540825 dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Dec  1 04:08:53 np0005540825 dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  1 04:08:53 np0005540825 systemd[1]: Finished dracut cmdline hook.
Dec  1 04:08:53 np0005540825 systemd[1]: Starting dracut pre-udev hook...
Dec  1 04:08:53 np0005540825 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  1 04:08:53 np0005540825 kernel: device-mapper: uevent: version 1.0.3
Dec  1 04:08:53 np0005540825 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  1 04:08:53 np0005540825 kernel: RPC: Registered named UNIX socket transport module.
Dec  1 04:08:53 np0005540825 kernel: RPC: Registered udp transport module.
Dec  1 04:08:53 np0005540825 kernel: RPC: Registered tcp transport module.
Dec  1 04:08:53 np0005540825 kernel: RPC: Registered tcp-with-tls transport module.
Dec  1 04:08:53 np0005540825 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  1 04:08:53 np0005540825 rpc.statd[441]: Version 2.5.4 starting
Dec  1 04:08:53 np0005540825 rpc.statd[441]: Initializing NSM state
Dec  1 04:08:54 np0005540825 rpc.idmapd[446]: Setting log level to 0
Dec  1 04:08:54 np0005540825 systemd[1]: Finished dracut pre-udev hook.
Dec  1 04:08:54 np0005540825 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  1 04:08:54 np0005540825 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Dec  1 04:08:54 np0005540825 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  1 04:08:54 np0005540825 systemd[1]: Starting dracut pre-trigger hook...
Dec  1 04:08:54 np0005540825 systemd[1]: Finished dracut pre-trigger hook.
Dec  1 04:08:54 np0005540825 systemd[1]: Starting Coldplug All udev Devices...
Dec  1 04:08:54 np0005540825 systemd[1]: Created slice Slice /system/modprobe.
Dec  1 04:08:54 np0005540825 systemd[1]: Starting Load Kernel Module configfs...
Dec  1 04:08:54 np0005540825 systemd[1]: Finished Coldplug All udev Devices.
Dec  1 04:08:54 np0005540825 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  1 04:08:54 np0005540825 systemd[1]: Finished Load Kernel Module configfs.
Dec  1 04:08:54 np0005540825 systemd[1]: Mounting Kernel Configuration File System...
Dec  1 04:08:54 np0005540825 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  1 04:08:54 np0005540825 systemd[1]: Reached target Network.
Dec  1 04:08:54 np0005540825 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  1 04:08:54 np0005540825 systemd[1]: Starting dracut initqueue hook...
Dec  1 04:08:54 np0005540825 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  1 04:08:54 np0005540825 systemd[1]: Mounted Kernel Configuration File System.
Dec  1 04:08:54 np0005540825 systemd[1]: Reached target System Initialization.
Dec  1 04:08:54 np0005540825 systemd[1]: Reached target Basic System.
Dec  1 04:08:54 np0005540825 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  1 04:08:54 np0005540825 kernel: vda: vda1
Dec  1 04:08:54 np0005540825 kernel: scsi host0: ata_piix
Dec  1 04:08:54 np0005540825 kernel: scsi host1: ata_piix
Dec  1 04:08:54 np0005540825 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  1 04:08:54 np0005540825 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  1 04:08:54 np0005540825 systemd-udevd[460]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 04:08:54 np0005540825 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  1 04:08:54 np0005540825 systemd[1]: Reached target Initrd Root Device.
Dec  1 04:08:54 np0005540825 kernel: ata1: found unknown device (class 0)
Dec  1 04:08:54 np0005540825 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  1 04:08:54 np0005540825 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  1 04:08:54 np0005540825 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  1 04:08:54 np0005540825 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  1 04:08:54 np0005540825 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  1 04:08:54 np0005540825 systemd[1]: Finished dracut initqueue hook.
Dec  1 04:08:54 np0005540825 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  1 04:08:54 np0005540825 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  1 04:08:54 np0005540825 systemd[1]: Reached target Remote File Systems.
Dec  1 04:08:54 np0005540825 systemd[1]: Starting dracut pre-mount hook...
Dec  1 04:08:54 np0005540825 systemd[1]: Finished dracut pre-mount hook.
Dec  1 04:08:54 np0005540825 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Dec  1 04:08:54 np0005540825 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Dec  1 04:08:54 np0005540825 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  1 04:08:54 np0005540825 systemd[1]: Mounting /sysroot...
Dec  1 04:08:55 np0005540825 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  1 04:08:55 np0005540825 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Dec  1 04:08:55 np0005540825 kernel: XFS (vda1): Ending clean mount
Dec  1 04:08:55 np0005540825 systemd[1]: Mounted /sysroot.
Dec  1 04:08:55 np0005540825 systemd[1]: Reached target Initrd Root File System.
Dec  1 04:08:55 np0005540825 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  1 04:08:55 np0005540825 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  1 04:08:55 np0005540825 systemd[1]: Reached target Initrd File Systems.
Dec  1 04:08:55 np0005540825 systemd[1]: Reached target Initrd Default Target.
Dec  1 04:08:55 np0005540825 systemd[1]: Starting dracut mount hook...
Dec  1 04:08:55 np0005540825 systemd[1]: Finished dracut mount hook.
Dec  1 04:08:55 np0005540825 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  1 04:08:55 np0005540825 rpc.idmapd[446]: exiting on signal 15
Dec  1 04:08:55 np0005540825 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  1 04:08:55 np0005540825 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Network.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Timer Units.
Dec  1 04:08:55 np0005540825 systemd[1]: dbus.socket: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  1 04:08:55 np0005540825 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Initrd Default Target.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Basic System.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Initrd Root Device.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Initrd /usr File System.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Path Units.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Remote File Systems.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Slice Units.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Socket Units.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target System Initialization.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Local File Systems.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Swaps.
Dec  1 04:08:55 np0005540825 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped dracut mount hook.
Dec  1 04:08:55 np0005540825 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped dracut pre-mount hook.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  1 04:08:55 np0005540825 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  1 04:08:55 np0005540825 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped dracut initqueue hook.
Dec  1 04:08:55 np0005540825 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped Apply Kernel Variables.
Dec  1 04:08:55 np0005540825 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  1 04:08:55 np0005540825 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped Coldplug All udev Devices.
Dec  1 04:08:55 np0005540825 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped dracut pre-trigger hook.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  1 04:08:55 np0005540825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped Setup Virtual Console.
Dec  1 04:08:55 np0005540825 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  1 04:08:55 np0005540825 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  1 04:08:55 np0005540825 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Closed udev Control Socket.
Dec  1 04:08:55 np0005540825 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Closed udev Kernel Socket.
Dec  1 04:08:55 np0005540825 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped dracut pre-udev hook.
Dec  1 04:08:55 np0005540825 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped dracut cmdline hook.
Dec  1 04:08:55 np0005540825 systemd[1]: Starting Cleanup udev Database...
Dec  1 04:08:55 np0005540825 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  1 04:08:55 np0005540825 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  1 04:08:55 np0005540825 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Stopped Create System Users.
Dec  1 04:08:55 np0005540825 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  1 04:08:55 np0005540825 systemd[1]: Finished Cleanup udev Database.
Dec  1 04:08:55 np0005540825 systemd[1]: Reached target Switch Root.
Dec  1 04:08:55 np0005540825 systemd[1]: Starting Switch Root...
Dec  1 04:08:55 np0005540825 systemd[1]: Switching root.
Dec  1 04:08:55 np0005540825 systemd-journald[305]: Journal stopped
Dec  1 04:08:56 np0005540825 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  1 04:08:56 np0005540825 kernel: audit: type=1404 audit(1764580135.923:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  1 04:08:56 np0005540825 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 04:08:56 np0005540825 kernel: SELinux:  policy capability open_perms=1
Dec  1 04:08:56 np0005540825 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 04:08:56 np0005540825 kernel: SELinux:  policy capability always_check_network=0
Dec  1 04:08:56 np0005540825 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 04:08:56 np0005540825 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 04:08:56 np0005540825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 04:08:56 np0005540825 kernel: audit: type=1403 audit(1764580136.056:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  1 04:08:56 np0005540825 systemd: Successfully loaded SELinux policy in 136.138ms.
Dec  1 04:08:56 np0005540825 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.668ms.
Dec  1 04:08:56 np0005540825 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  1 04:08:56 np0005540825 systemd: Detected virtualization kvm.
Dec  1 04:08:56 np0005540825 systemd: Detected architecture x86-64.
Dec  1 04:08:56 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:08:56 np0005540825 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  1 04:08:56 np0005540825 systemd: Stopped Switch Root.
Dec  1 04:08:56 np0005540825 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  1 04:08:56 np0005540825 systemd: Created slice Slice /system/getty.
Dec  1 04:08:56 np0005540825 systemd: Created slice Slice /system/serial-getty.
Dec  1 04:08:56 np0005540825 systemd: Created slice Slice /system/sshd-keygen.
Dec  1 04:08:56 np0005540825 systemd: Created slice User and Session Slice.
Dec  1 04:08:56 np0005540825 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  1 04:08:56 np0005540825 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  1 04:08:56 np0005540825 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  1 04:08:56 np0005540825 systemd: Reached target Local Encrypted Volumes.
Dec  1 04:08:56 np0005540825 systemd: Stopped target Switch Root.
Dec  1 04:08:56 np0005540825 systemd: Stopped target Initrd File Systems.
Dec  1 04:08:56 np0005540825 systemd: Stopped target Initrd Root File System.
Dec  1 04:08:56 np0005540825 systemd: Reached target Local Integrity Protected Volumes.
Dec  1 04:08:56 np0005540825 systemd: Reached target Path Units.
Dec  1 04:08:56 np0005540825 systemd: Reached target rpc_pipefs.target.
Dec  1 04:08:56 np0005540825 systemd: Reached target Slice Units.
Dec  1 04:08:56 np0005540825 systemd: Reached target Swaps.
Dec  1 04:08:56 np0005540825 systemd: Reached target Local Verity Protected Volumes.
Dec  1 04:08:56 np0005540825 systemd: Listening on RPCbind Server Activation Socket.
Dec  1 04:08:56 np0005540825 systemd: Reached target RPC Port Mapper.
Dec  1 04:08:56 np0005540825 systemd: Listening on Process Core Dump Socket.
Dec  1 04:08:56 np0005540825 systemd: Listening on initctl Compatibility Named Pipe.
Dec  1 04:08:56 np0005540825 systemd: Listening on udev Control Socket.
Dec  1 04:08:56 np0005540825 systemd: Listening on udev Kernel Socket.
Dec  1 04:08:56 np0005540825 systemd: Mounting Huge Pages File System...
Dec  1 04:08:56 np0005540825 systemd: Mounting POSIX Message Queue File System...
Dec  1 04:08:56 np0005540825 systemd: Mounting Kernel Debug File System...
Dec  1 04:08:56 np0005540825 systemd: Mounting Kernel Trace File System...
Dec  1 04:08:56 np0005540825 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  1 04:08:56 np0005540825 systemd: Starting Create List of Static Device Nodes...
Dec  1 04:08:56 np0005540825 systemd: Starting Load Kernel Module configfs...
Dec  1 04:08:56 np0005540825 systemd: Starting Load Kernel Module drm...
Dec  1 04:08:56 np0005540825 systemd: Starting Load Kernel Module efi_pstore...
Dec  1 04:08:56 np0005540825 systemd: Starting Load Kernel Module fuse...
Dec  1 04:08:56 np0005540825 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  1 04:08:56 np0005540825 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  1 04:08:56 np0005540825 systemd: Stopped File System Check on Root Device.
Dec  1 04:08:56 np0005540825 systemd: Stopped Journal Service.
Dec  1 04:08:56 np0005540825 kernel: fuse: init (API version 7.37)
Dec  1 04:08:56 np0005540825 systemd: Starting Journal Service...
Dec  1 04:08:56 np0005540825 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  1 04:08:56 np0005540825 systemd: Starting Generate network units from Kernel command line...
Dec  1 04:08:56 np0005540825 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 04:08:56 np0005540825 systemd: Starting Remount Root and Kernel File Systems...
Dec  1 04:08:56 np0005540825 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  1 04:08:56 np0005540825 systemd: Starting Apply Kernel Variables...
Dec  1 04:08:56 np0005540825 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  1 04:08:56 np0005540825 systemd: Starting Coldplug All udev Devices...
Dec  1 04:08:56 np0005540825 systemd-journald[680]: Journal started
Dec  1 04:08:56 np0005540825 systemd-journald[680]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  1 04:08:56 np0005540825 systemd[1]: Queued start job for default target Multi-User System.
Dec  1 04:08:56 np0005540825 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  1 04:08:56 np0005540825 systemd: Started Journal Service.
Dec  1 04:08:56 np0005540825 systemd[1]: Mounted Huge Pages File System.
Dec  1 04:08:56 np0005540825 systemd[1]: Mounted POSIX Message Queue File System.
Dec  1 04:08:56 np0005540825 systemd[1]: Mounted Kernel Debug File System.
Dec  1 04:08:56 np0005540825 systemd[1]: Mounted Kernel Trace File System.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Create List of Static Device Nodes.
Dec  1 04:08:56 np0005540825 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Load Kernel Module configfs.
Dec  1 04:08:56 np0005540825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  1 04:08:56 np0005540825 kernel: ACPI: bus type drm_connector registered
Dec  1 04:08:56 np0005540825 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Load Kernel Module fuse.
Dec  1 04:08:56 np0005540825 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Load Kernel Module drm.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Generate network units from Kernel command line.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Apply Kernel Variables.
Dec  1 04:08:56 np0005540825 systemd[1]: Mounting FUSE Control File System...
Dec  1 04:08:56 np0005540825 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  1 04:08:56 np0005540825 systemd[1]: Starting Rebuild Hardware Database...
Dec  1 04:08:56 np0005540825 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  1 04:08:56 np0005540825 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  1 04:08:56 np0005540825 systemd[1]: Starting Load/Save OS Random Seed...
Dec  1 04:08:56 np0005540825 systemd[1]: Starting Create System Users...
Dec  1 04:08:56 np0005540825 systemd[1]: Mounted FUSE Control File System.
Dec  1 04:08:56 np0005540825 systemd-journald[680]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  1 04:08:56 np0005540825 systemd-journald[680]: Received client request to flush runtime journal.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Load/Save OS Random Seed.
Dec  1 04:08:56 np0005540825 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Create System Users.
Dec  1 04:08:56 np0005540825 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Coldplug All udev Devices.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  1 04:08:56 np0005540825 systemd[1]: Reached target Preparation for Local File Systems.
Dec  1 04:08:56 np0005540825 systemd[1]: Reached target Local File Systems.
Dec  1 04:08:56 np0005540825 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  1 04:08:56 np0005540825 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  1 04:08:56 np0005540825 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  1 04:08:56 np0005540825 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  1 04:08:56 np0005540825 systemd[1]: Starting Automatic Boot Loader Update...
Dec  1 04:08:56 np0005540825 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  1 04:08:56 np0005540825 systemd[1]: Starting Create Volatile Files and Directories...
Dec  1 04:08:56 np0005540825 bootctl[697]: Couldn't find EFI system partition, skipping.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Automatic Boot Loader Update.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Create Volatile Files and Directories.
Dec  1 04:08:56 np0005540825 systemd[1]: Starting Security Auditing Service...
Dec  1 04:08:56 np0005540825 systemd[1]: Starting RPC Bind...
Dec  1 04:08:56 np0005540825 systemd[1]: Starting Rebuild Journal Catalog...
Dec  1 04:08:56 np0005540825 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  1 04:08:56 np0005540825 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Rebuild Journal Catalog.
Dec  1 04:08:56 np0005540825 augenrules[708]: /sbin/augenrules: No change
Dec  1 04:08:56 np0005540825 systemd[1]: Started RPC Bind.
Dec  1 04:08:56 np0005540825 augenrules[723]: No rules
Dec  1 04:08:56 np0005540825 augenrules[723]: enabled 1
Dec  1 04:08:56 np0005540825 augenrules[723]: failure 1
Dec  1 04:08:56 np0005540825 augenrules[723]: pid 703
Dec  1 04:08:56 np0005540825 augenrules[723]: rate_limit 0
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog_limit 8192
Dec  1 04:08:56 np0005540825 augenrules[723]: lost 0
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog 1
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog_wait_time 60000
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog_wait_time_actual 0
Dec  1 04:08:56 np0005540825 augenrules[723]: enabled 1
Dec  1 04:08:56 np0005540825 augenrules[723]: failure 1
Dec  1 04:08:56 np0005540825 augenrules[723]: pid 703
Dec  1 04:08:56 np0005540825 augenrules[723]: rate_limit 0
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog_limit 8192
Dec  1 04:08:56 np0005540825 augenrules[723]: lost 0
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog 0
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog_wait_time 60000
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog_wait_time_actual 0
Dec  1 04:08:56 np0005540825 augenrules[723]: enabled 1
Dec  1 04:08:56 np0005540825 augenrules[723]: failure 1
Dec  1 04:08:56 np0005540825 augenrules[723]: pid 703
Dec  1 04:08:56 np0005540825 augenrules[723]: rate_limit 0
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog_limit 8192
Dec  1 04:08:56 np0005540825 augenrules[723]: lost 0
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog 0
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog_wait_time 60000
Dec  1 04:08:56 np0005540825 augenrules[723]: backlog_wait_time_actual 0
Dec  1 04:08:56 np0005540825 systemd[1]: Started Security Auditing Service.
Dec  1 04:08:56 np0005540825 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  1 04:08:56 np0005540825 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  1 04:08:57 np0005540825 systemd[1]: Finished Rebuild Hardware Database.
Dec  1 04:08:57 np0005540825 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  1 04:08:57 np0005540825 systemd[1]: Starting Update is Completed...
Dec  1 04:08:57 np0005540825 systemd[1]: Finished Update is Completed.
Dec  1 04:08:57 np0005540825 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Dec  1 04:08:57 np0005540825 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  1 04:08:57 np0005540825 systemd[1]: Reached target System Initialization.
Dec  1 04:08:57 np0005540825 systemd[1]: Started dnf makecache --timer.
Dec  1 04:08:57 np0005540825 systemd[1]: Started Daily rotation of log files.
Dec  1 04:08:57 np0005540825 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  1 04:08:57 np0005540825 systemd[1]: Reached target Timer Units.
Dec  1 04:08:57 np0005540825 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  1 04:08:57 np0005540825 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  1 04:08:57 np0005540825 systemd[1]: Reached target Socket Units.
Dec  1 04:08:57 np0005540825 systemd[1]: Starting D-Bus System Message Bus...
Dec  1 04:08:57 np0005540825 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 04:08:57 np0005540825 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  1 04:08:57 np0005540825 systemd[1]: Starting Load Kernel Module configfs...
Dec  1 04:08:57 np0005540825 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  1 04:08:57 np0005540825 systemd[1]: Finished Load Kernel Module configfs.
Dec  1 04:08:57 np0005540825 systemd[1]: Started D-Bus System Message Bus.
Dec  1 04:08:57 np0005540825 systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 04:08:57 np0005540825 systemd[1]: Reached target Basic System.
Dec  1 04:08:57 np0005540825 dbus-broker-lau[764]: Ready
Dec  1 04:08:57 np0005540825 systemd[1]: Starting NTP client/server...
Dec  1 04:08:57 np0005540825 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  1 04:08:57 np0005540825 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  1 04:08:57 np0005540825 systemd[1]: Starting IPv4 firewall with iptables...
Dec  1 04:08:57 np0005540825 systemd[1]: Started irqbalance daemon.
Dec  1 04:08:57 np0005540825 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  1 04:08:57 np0005540825 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 04:08:57 np0005540825 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 04:08:57 np0005540825 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 04:08:57 np0005540825 systemd[1]: Reached target sshd-keygen.target.
Dec  1 04:08:57 np0005540825 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  1 04:08:57 np0005540825 systemd[1]: Reached target User and Group Name Lookups.
Dec  1 04:08:57 np0005540825 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  1 04:08:57 np0005540825 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  1 04:08:57 np0005540825 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  1 04:08:57 np0005540825 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  1 04:08:57 np0005540825 systemd[1]: Starting User Login Management...
Dec  1 04:08:57 np0005540825 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  1 04:08:57 np0005540825 chronyd[795]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  1 04:08:57 np0005540825 chronyd[795]: Loaded 0 symmetric keys
Dec  1 04:08:57 np0005540825 chronyd[795]: Using right/UTC timezone to obtain leap second data
Dec  1 04:08:57 np0005540825 chronyd[795]: Loaded seccomp filter (level 2)
Dec  1 04:08:57 np0005540825 systemd[1]: Started NTP client/server.
Dec  1 04:08:57 np0005540825 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  1 04:08:57 np0005540825 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  1 04:08:57 np0005540825 systemd-logind[789]: New seat seat0.
Dec  1 04:08:57 np0005540825 systemd[1]: Started User Login Management.
Dec  1 04:08:57 np0005540825 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  1 04:08:57 np0005540825 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  1 04:08:57 np0005540825 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  1 04:08:57 np0005540825 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  1 04:08:57 np0005540825 kernel: Console: switching to colour dummy device 80x25
Dec  1 04:08:57 np0005540825 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  1 04:08:57 np0005540825 kernel: [drm] features: -context_init
Dec  1 04:08:57 np0005540825 kernel: [drm] number of scanouts: 1
Dec  1 04:08:57 np0005540825 kernel: [drm] number of cap sets: 0
Dec  1 04:08:57 np0005540825 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  1 04:08:57 np0005540825 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  1 04:08:57 np0005540825 kernel: Console: switching to colour frame buffer device 128x48
Dec  1 04:08:57 np0005540825 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  1 04:08:57 np0005540825 kernel: kvm_amd: TSC scaling supported
Dec  1 04:08:57 np0005540825 kernel: kvm_amd: Nested Virtualization enabled
Dec  1 04:08:57 np0005540825 kernel: kvm_amd: Nested Paging enabled
Dec  1 04:08:57 np0005540825 kernel: kvm_amd: LBR virtualization supported
Dec  1 04:08:57 np0005540825 iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Dec  1 04:08:57 np0005540825 systemd[1]: Finished IPv4 firewall with iptables.
Dec  1 04:08:57 np0005540825 cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 01 Dec 2025 09:08:57 +0000. Up 6.50 seconds.
Dec  1 04:08:58 np0005540825 systemd[1]: run-cloud\x2dinit-tmp-tmpwy5epy06.mount: Deactivated successfully.
Dec  1 04:08:58 np0005540825 systemd[1]: Starting Hostname Service...
Dec  1 04:08:58 np0005540825 systemd[1]: Started Hostname Service.
Dec  1 04:08:58 np0005540825 systemd-hostnamed[854]: Hostname set to <np0005540825.novalocal> (static)
Dec  1 04:08:58 np0005540825 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  1 04:08:58 np0005540825 systemd[1]: Reached target Preparation for Network.
Dec  1 04:08:58 np0005540825 systemd[1]: Starting Network Manager...
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3527] NetworkManager (version 1.54.1-1.el9) is starting... (boot:f3f81c00-6df2-4ea1-97f9-33d871af0070)
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3532] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3592] manager[0x55ee8a9d2080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3634] hostname: hostname: using hostnamed
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3634] hostname: static hostname changed from (none) to "np0005540825.novalocal"
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3637] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3745] manager[0x55ee8a9d2080]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3746] manager[0x55ee8a9d2080]: rfkill: WWAN hardware radio set enabled
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3786] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3787] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3787] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3788] manager: Networking is enabled by state file
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3790] settings: Loaded settings plugin: keyfile (internal)
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3802] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3826] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  1 04:08:58 np0005540825 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3862] dhcp: init: Using DHCP client 'internal'
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3865] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3879] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3888] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3896] device (lo): Activation: starting connection 'lo' (9cf04f40-f2df-4143-8f8e-28f6ca572455)
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3906] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3909] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3962] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3966] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3969] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3971] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3974] device (eth0): carrier: link connected
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3978] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3985] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3991] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3994] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3995] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3998] manager: NetworkManager state is now CONNECTING
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.3999] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.4007] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.4010] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 04:08:58 np0005540825 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 04:08:58 np0005540825 systemd[1]: Started Network Manager.
Dec  1 04:08:58 np0005540825 systemd[1]: Reached target Network.
Dec  1 04:08:58 np0005540825 systemd[1]: Starting Network Manager Wait Online...
Dec  1 04:08:58 np0005540825 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  1 04:08:58 np0005540825 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.4275] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.4278] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 04:08:58 np0005540825 NetworkManager[858]: <info>  [1764580138.4292] device (lo): Activation: successful, device activated.
Dec  1 04:08:58 np0005540825 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  1 04:08:58 np0005540825 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  1 04:08:58 np0005540825 systemd[1]: Reached target NFS client services.
Dec  1 04:08:58 np0005540825 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  1 04:08:58 np0005540825 systemd[1]: Reached target Remote File Systems.
Dec  1 04:08:58 np0005540825 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 04:08:59 np0005540825 NetworkManager[858]: <info>  [1764580139.0005] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Dec  1 04:08:59 np0005540825 NetworkManager[858]: <info>  [1764580139.0017] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 04:08:59 np0005540825 NetworkManager[858]: <info>  [1764580139.0036] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:08:59 np0005540825 NetworkManager[858]: <info>  [1764580139.0068] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:08:59 np0005540825 NetworkManager[858]: <info>  [1764580139.0069] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:08:59 np0005540825 NetworkManager[858]: <info>  [1764580139.0072] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 04:08:59 np0005540825 NetworkManager[858]: <info>  [1764580139.0076] device (eth0): Activation: successful, device activated.
Dec  1 04:08:59 np0005540825 NetworkManager[858]: <info>  [1764580139.0080] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 04:08:59 np0005540825 NetworkManager[858]: <info>  [1764580139.0084] manager: startup complete
Dec  1 04:08:59 np0005540825 systemd[1]: Finished Network Manager Wait Online.
Dec  1 04:08:59 np0005540825 systemd[1]: Starting Cloud-init: Network Stage...
Dec  1 04:08:59 np0005540825 cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 01 Dec 2025 09:08:59 +0000. Up 8.01 seconds.
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: |  eth0  | True |        38.102.83.181         | 255.255.255.0 | global | fa:16:3e:a3:ae:2d |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fea3:ae2d/64 |       .       |  link  | fa:16:3e:a3:ae:2d |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  1 04:08:59 np0005540825 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 04:09:00 np0005540825 cloud-init[922]: Generating public/private rsa key pair.
Dec  1 04:09:00 np0005540825 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  1 04:09:00 np0005540825 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  1 04:09:00 np0005540825 cloud-init[922]: The key fingerprint is:
Dec  1 04:09:00 np0005540825 cloud-init[922]: SHA256:7E1jMzT6wcv98wRzx/41U42pvd5oY5HPm+i2osus8Ok root@np0005540825.novalocal
Dec  1 04:09:00 np0005540825 cloud-init[922]: The key's randomart image is:
Dec  1 04:09:00 np0005540825 cloud-init[922]: +---[RSA 3072]----+
Dec  1 04:09:00 np0005540825 cloud-init[922]: |                 |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |                 |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |          o      |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |       . + .   +.|
Dec  1 04:09:00 np0005540825 cloud-init[922]: |        S O   =.*|
Dec  1 04:09:00 np0005540825 cloud-init[922]: |       . * B oo=o|
Dec  1 04:09:00 np0005540825 cloud-init[922]: |     .  . = o .*+|
Dec  1 04:09:00 np0005540825 cloud-init[922]: |      o +  . o*=O|
Dec  1 04:09:00 np0005540825 cloud-init[922]: |      .E.=o +B**=|
Dec  1 04:09:00 np0005540825 cloud-init[922]: +----[SHA256]-----+
Dec  1 04:09:00 np0005540825 cloud-init[922]: Generating public/private ecdsa key pair.
Dec  1 04:09:00 np0005540825 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  1 04:09:00 np0005540825 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  1 04:09:00 np0005540825 cloud-init[922]: The key fingerprint is:
Dec  1 04:09:00 np0005540825 cloud-init[922]: SHA256:Cc4sLvAozyu9h3HcudHc2ALQd+rOEiiZRWMh2vIoarM root@np0005540825.novalocal
Dec  1 04:09:00 np0005540825 cloud-init[922]: The key's randomart image is:
Dec  1 04:09:00 np0005540825 cloud-init[922]: +---[ECDSA 256]---+
Dec  1 04:09:00 np0005540825 cloud-init[922]: |  . .o           |
Dec  1 04:09:00 np0005540825 cloud-init[922]: | o .= . . .      |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |o .o o.. o       |
Dec  1 04:09:00 np0005540825 cloud-init[922]: | +  .+....       |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |+ .=.o+*S+       |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |o+=.+.= * o      |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |++o=.  * .       |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |+o=.. o o        |
Dec  1 04:09:00 np0005540825 cloud-init[922]: | E=+   .         |
Dec  1 04:09:00 np0005540825 cloud-init[922]: +----[SHA256]-----+
Dec  1 04:09:00 np0005540825 cloud-init[922]: Generating public/private ed25519 key pair.
Dec  1 04:09:00 np0005540825 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  1 04:09:00 np0005540825 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  1 04:09:00 np0005540825 cloud-init[922]: The key fingerprint is:
Dec  1 04:09:00 np0005540825 cloud-init[922]: SHA256:3a7cwf8s9wayz4defQKI77IIoZ6Ui8YZ5pfjkg1+Vag root@np0005540825.novalocal
Dec  1 04:09:00 np0005540825 cloud-init[922]: The key's randomart image is:
Dec  1 04:09:00 np0005540825 cloud-init[922]: +--[ED25519 256]--+
Dec  1 04:09:00 np0005540825 cloud-init[922]: |                 |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |                 |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |      .          |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |     . . o o     |
Dec  1 04:09:00 np0005540825 cloud-init[922]: |    o . S o o    |
Dec  1 04:09:00 np0005540825 cloud-init[922]: | + E o   . o... .|
Dec  1 04:09:00 np0005540825 cloud-init[922]: |= X +     . +o.o+|
Dec  1 04:09:00 np0005540825 cloud-init[922]: | % O . ..o o.+o+=|
Dec  1 04:09:00 np0005540825 cloud-init[922]: |o O.. . .o+ .o=*=|
Dec  1 04:09:00 np0005540825 cloud-init[922]: +----[SHA256]-----+
Dec  1 04:09:00 np0005540825 sm-notify[1005]: Version 2.5.4 starting
Dec  1 04:09:00 np0005540825 systemd[1]: Finished Cloud-init: Network Stage.
Dec  1 04:09:00 np0005540825 systemd[1]: Reached target Cloud-config availability.
Dec  1 04:09:00 np0005540825 systemd[1]: Reached target Network is Online.
Dec  1 04:09:00 np0005540825 systemd[1]: Starting Cloud-init: Config Stage...
Dec  1 04:09:00 np0005540825 systemd[1]: Starting Crash recovery kernel arming...
Dec  1 04:09:00 np0005540825 systemd[1]: Starting Notify NFS peers of a restart...
Dec  1 04:09:00 np0005540825 systemd[1]: Starting System Logging Service...
Dec  1 04:09:00 np0005540825 systemd[1]: Starting OpenSSH server daemon...
Dec  1 04:09:00 np0005540825 systemd[1]: Starting Permit User Sessions...
Dec  1 04:09:00 np0005540825 systemd[1]: Started Notify NFS peers of a restart.
Dec  1 04:09:00 np0005540825 systemd[1]: Started OpenSSH server daemon.
Dec  1 04:09:00 np0005540825 systemd[1]: Finished Permit User Sessions.
Dec  1 04:09:00 np0005540825 systemd[1]: Started Command Scheduler.
Dec  1 04:09:00 np0005540825 systemd[1]: Started Getty on tty1.
Dec  1 04:09:00 np0005540825 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Dec  1 04:09:00 np0005540825 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  1 04:09:00 np0005540825 systemd[1]: Started Serial Getty on ttyS0.
Dec  1 04:09:00 np0005540825 systemd[1]: Reached target Login Prompts.
Dec  1 04:09:00 np0005540825 systemd[1]: Started System Logging Service.
Dec  1 04:09:00 np0005540825 systemd[1]: Reached target Multi-User System.
Dec  1 04:09:00 np0005540825 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  1 04:09:00 np0005540825 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  1 04:09:00 np0005540825 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  1 04:09:00 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 04:09:00 np0005540825 kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Dec  1 04:09:00 np0005540825 kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Dec  1 04:09:00 np0005540825 cloud-init[1124]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 01 Dec 2025 09:09:00 +0000. Up 9.54 seconds.
Dec  1 04:09:00 np0005540825 systemd[1]: Finished Cloud-init: Config Stage.
Dec  1 04:09:01 np0005540825 systemd[1]: Starting Cloud-init: Final Stage...
Dec  1 04:09:01 np0005540825 dracut[1285]: dracut-057-102.git20250818.el9
Dec  1 04:09:01 np0005540825 cloud-init[1303]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 01 Dec 2025 09:09:01 +0000. Up 9.99 seconds.
Dec  1 04:09:01 np0005540825 cloud-init[1307]: #############################################################
Dec  1 04:09:01 np0005540825 cloud-init[1308]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  1 04:09:01 np0005540825 cloud-init[1312]: 256 SHA256:Cc4sLvAozyu9h3HcudHc2ALQd+rOEiiZRWMh2vIoarM root@np0005540825.novalocal (ECDSA)
Dec  1 04:09:01 np0005540825 dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Dec  1 04:09:01 np0005540825 cloud-init[1320]: 256 SHA256:3a7cwf8s9wayz4defQKI77IIoZ6Ui8YZ5pfjkg1+Vag root@np0005540825.novalocal (ED25519)
Dec  1 04:09:01 np0005540825 cloud-init[1327]: 3072 SHA256:7E1jMzT6wcv98wRzx/41U42pvd5oY5HPm+i2osus8Ok root@np0005540825.novalocal (RSA)
Dec  1 04:09:01 np0005540825 cloud-init[1330]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  1 04:09:01 np0005540825 cloud-init[1334]: #############################################################
Dec  1 04:09:01 np0005540825 cloud-init[1303]: Cloud-init v. 24.4-7.el9 finished at Mon, 01 Dec 2025 09:09:01 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.22 seconds
Dec  1 04:09:01 np0005540825 systemd[1]: Finished Cloud-init: Final Stage.
Dec  1 04:09:01 np0005540825 systemd[1]: Reached target Cloud-init target.
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: memstrack is not available
Dec  1 04:09:02 np0005540825 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  1 04:09:02 np0005540825 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  1 04:09:03 np0005540825 dracut[1287]: memstrack is not available
Dec  1 04:09:03 np0005540825 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  1 04:09:03 np0005540825 dracut[1287]: *** Including module: systemd ***
Dec  1 04:09:03 np0005540825 dracut[1287]: *** Including module: fips ***
Dec  1 04:09:03 np0005540825 chronyd[795]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Dec  1 04:09:03 np0005540825 chronyd[795]: System clock TAI offset set to 37 seconds
Dec  1 04:09:04 np0005540825 dracut[1287]: *** Including module: systemd-initrd ***
Dec  1 04:09:04 np0005540825 dracut[1287]: *** Including module: i18n ***
Dec  1 04:09:04 np0005540825 dracut[1287]: *** Including module: drm ***
Dec  1 04:09:04 np0005540825 dracut[1287]: *** Including module: prefixdevname ***
Dec  1 04:09:04 np0005540825 dracut[1287]: *** Including module: kernel-modules ***
Dec  1 04:09:05 np0005540825 kernel: block vda: the capability attribute has been deprecated.
Dec  1 04:09:05 np0005540825 dracut[1287]: *** Including module: kernel-modules-extra ***
Dec  1 04:09:05 np0005540825 dracut[1287]: *** Including module: qemu ***
Dec  1 04:09:05 np0005540825 dracut[1287]: *** Including module: fstab-sys ***
Dec  1 04:09:05 np0005540825 dracut[1287]: *** Including module: rootfs-block ***
Dec  1 04:09:05 np0005540825 dracut[1287]: *** Including module: terminfo ***
Dec  1 04:09:05 np0005540825 dracut[1287]: *** Including module: udev-rules ***
Dec  1 04:09:06 np0005540825 dracut[1287]: Skipping udev rule: 91-permissions.rules
Dec  1 04:09:06 np0005540825 dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  1 04:09:06 np0005540825 dracut[1287]: *** Including module: virtiofs ***
Dec  1 04:09:06 np0005540825 dracut[1287]: *** Including module: dracut-systemd ***
Dec  1 04:09:06 np0005540825 dracut[1287]: *** Including module: usrmount ***
Dec  1 04:09:06 np0005540825 dracut[1287]: *** Including module: base ***
Dec  1 04:09:06 np0005540825 dracut[1287]: *** Including module: fs-lib ***
Dec  1 04:09:06 np0005540825 dracut[1287]: *** Including module: kdumpbase ***
Dec  1 04:09:07 np0005540825 dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  1 04:09:07 np0005540825 dracut[1287]:  microcode_ctl module: mangling fw_dir
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  1 04:09:07 np0005540825 irqbalance[785]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  1 04:09:07 np0005540825 irqbalance[785]: IRQ 25 affinity is now unmanaged
Dec  1 04:09:07 np0005540825 irqbalance[785]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  1 04:09:07 np0005540825 irqbalance[785]: IRQ 31 affinity is now unmanaged
Dec  1 04:09:07 np0005540825 irqbalance[785]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  1 04:09:07 np0005540825 irqbalance[785]: IRQ 28 affinity is now unmanaged
Dec  1 04:09:07 np0005540825 irqbalance[785]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  1 04:09:07 np0005540825 irqbalance[785]: IRQ 32 affinity is now unmanaged
Dec  1 04:09:07 np0005540825 irqbalance[785]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  1 04:09:07 np0005540825 irqbalance[785]: IRQ 30 affinity is now unmanaged
Dec  1 04:09:07 np0005540825 irqbalance[785]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  1 04:09:07 np0005540825 irqbalance[785]: IRQ 29 affinity is now unmanaged
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: configuration "intel" is ignored
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  1 04:09:07 np0005540825 dracut[1287]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
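[Note] The microcode_ctl entries above show dracut walking each Intel caveat data directory while rebuilding the kdump initramfs; on this KVM guest every caveat configuration is reported as ignored, which is expected when the virtualized CPU does not match the models a caveat targets. A minimal way to inspect the same caveat set by hand, assuming only the paths already shown in the log:

    # List the caveat data directories dracut iterated over
    ls /usr/share/microcode_ctl/ucode_with_caveats/
    rpm -q microcode_ctl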
Dec  1 04:09:07 np0005540825 dracut[1287]: *** Including module: openssl ***
Dec  1 04:09:07 np0005540825 dracut[1287]: *** Including module: shutdown ***
Dec  1 04:09:07 np0005540825 dracut[1287]: *** Including module: squash ***
Dec  1 04:09:07 np0005540825 dracut[1287]: *** Including modules done ***
Dec  1 04:09:07 np0005540825 dracut[1287]: *** Installing kernel module dependencies ***
Dec  1 04:09:08 np0005540825 dracut[1287]: *** Installing kernel module dependencies done ***
Dec  1 04:09:08 np0005540825 dracut[1287]: *** Resolving executable dependencies ***
Dec  1 04:09:09 np0005540825 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 04:09:10 np0005540825 dracut[1287]: *** Resolving executable dependencies done ***
Dec  1 04:09:10 np0005540825 dracut[1287]: *** Generating early-microcode cpio image ***
Dec  1 04:09:10 np0005540825 dracut[1287]: *** Store current command line parameters ***
Dec  1 04:09:10 np0005540825 dracut[1287]: Stored kernel commandline:
Dec  1 04:09:10 np0005540825 dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Dec  1 04:09:10 np0005540825 dracut[1287]: *** Install squash loader ***
Dec  1 04:09:11 np0005540825 dracut[1287]: *** Squashing the files inside the initramfs ***
Dec  1 04:09:13 np0005540825 dracut[1287]: *** Squashing the files inside the initramfs done ***
Dec  1 04:09:13 np0005540825 dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Dec  1 04:09:13 np0005540825 dracut[1287]: *** Hardlinking files ***
Dec  1 04:09:13 np0005540825 dracut[1287]: *** Hardlinking files done ***
Dec  1 04:09:13 np0005540825 dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Dec  1 04:09:14 np0005540825 kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Dec  1 04:09:14 np0005540825 kdumpctl[1019]: kdump: Starting kdump: [OK]
Dec  1 04:09:14 np0005540825 systemd[1]: Finished Crash recovery kernel arming.
Dec  1 04:09:14 np0005540825 systemd[1]: Startup finished in 1.684s (kernel) + 2.926s (initrd) + 18.165s (userspace) = 22.776s.
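[Note] These lines close out boot: the kdump initramfs built by dracut[1287] is loaded via kexec, the crash recovery kernel is armed, and systemd reports the total startup time. A quick check that kdump stays armed on such a host, using the standard RHEL 9 tooling already visible in the log:

    kdumpctl status                  # should report kdump is operational
    systemctl is-active kdump.service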
Dec  1 04:09:17 np0005540825 irqbalance[785]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  1 04:09:17 np0005540825 irqbalance[785]: IRQ 27 affinity is now unmanaged
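[Note] The recurring irqbalance "Operation not permitted" messages are typical of KVM guests, where certain interrupts (e.g. virtio MSI-X vectors) cannot be re-steered from inside the guest; irqbalance then drops them to unmanaged, exactly as logged. If the noise is unwanted, the affected IRQs can be banned up front. A sketch, assuming the stock RHEL unit reads IRQBALANCE_ARGS from /etc/sysconfig/irqbalance (verify on the target system):

    # /etc/sysconfig/irqbalance
    IRQBALANCE_ARGS="--banirq=25 --banirq=27 --banirq=28 --banirq=29 --banirq=30 --banirq=31 --banirq=32"

    systemctl restart irqbalance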
Dec  1 04:09:28 np0005540825 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 04:09:31 np0005540825 systemd[1]: Created slice User Slice of UID 1000.
Dec  1 04:09:31 np0005540825 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  1 04:09:31 np0005540825 systemd-logind[789]: New session 1 of user zuul.
Dec  1 04:09:31 np0005540825 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  1 04:09:31 np0005540825 systemd[1]: Starting User Manager for UID 1000...
Dec  1 04:09:31 np0005540825 systemd[4303]: Queued start job for default target Main User Target.
Dec  1 04:09:31 np0005540825 systemd[4303]: Created slice User Application Slice.
Dec  1 04:09:31 np0005540825 systemd[4303]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  1 04:09:31 np0005540825 systemd[4303]: Started Daily Cleanup of User's Temporary Directories.
Dec  1 04:09:31 np0005540825 systemd[4303]: Reached target Paths.
Dec  1 04:09:31 np0005540825 systemd[4303]: Reached target Timers.
Dec  1 04:09:31 np0005540825 systemd[4303]: Starting D-Bus User Message Bus Socket...
Dec  1 04:09:31 np0005540825 systemd[4303]: Starting Create User's Volatile Files and Directories...
Dec  1 04:09:31 np0005540825 systemd[4303]: Listening on D-Bus User Message Bus Socket.
Dec  1 04:09:31 np0005540825 systemd[4303]: Reached target Sockets.
Dec  1 04:09:31 np0005540825 systemd[4303]: Finished Create User's Volatile Files and Directories.
Dec  1 04:09:31 np0005540825 systemd[4303]: Reached target Basic System.
Dec  1 04:09:31 np0005540825 systemd[4303]: Reached target Main User Target.
Dec  1 04:09:31 np0005540825 systemd[4303]: Startup finished in 150ms.
Dec  1 04:09:31 np0005540825 systemd[1]: Started User Manager for UID 1000.
Dec  1 04:09:31 np0005540825 systemd[1]: Started Session 1 of User zuul.
Dec  1 04:09:32 np0005540825 python3[4385]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:09:34 np0005540825 python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:09:44 np0005540825 python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:09:45 np0005540825 python3[4511]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  1 04:09:47 np0005540825 python3[4537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs83Me/XJ93JONH+A3ys3BwT4zj02WAeI+PLa+4ictmx5jo+8RBm+8bQesnDGHtSEP3xHjam8Fwfo48sUz5kG1CEXeLWH7xBEXZQ+pidesIq17dWuB2YicfBCHGhZlqb9l/fISdA7PnN5BsCCyr5hQUlvwUPLq0dzE02EgJGcgUqI2ytoS8AvmZ5RX7c4IqGNOi3dFOny3uCDUlNZf/m10t5Eqaq53DNvn55ZT7HmuZuq1QSut2qopHMOrbqUIx17TPb+KiAJG5h8+CV0pJKLq1fSsJaTqR/MZTXsPF5oJHMT5BqnKmRCBNJyY+ko1jZA3a2jF3MqcxIxwgndHOIWitGlByPkFLlWfLV78+yskN9w1nWzxFvEhkCexTCcqU8TmYGBBjKU4l0icf9POdHjr9cZVQmRYdIveeEtZJS0R8S9Tx1uYEuLAXYurVEYBQXuNDw4iQV4pSabQVesX8t9KwUTkxMg2kUXIjvBcHSEiT6wtG+W/j0byNv0sj6FU2EM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:09:47 np0005540825 python3[4561]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:48 np0005540825 python3[4660]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:09:48 np0005540825 python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764580188.1157327-251-277008742459174/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=0c4295e5299f438c97cd17e88c30c039_id_rsa follow=False checksum=c0f0a3fd8bd6e06ffcd4372a522626913bfa295a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:49 np0005540825 python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:09:49 np0005540825 python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764580188.956457-306-177751612332353/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=0c4295e5299f438c97cd17e88c30c039_id_rsa.pub follow=False checksum=0bbaabac56f17c62b907e9f050ef8c82d5faceb9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
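[Note] Throughout these module invocations, mode= values are logged as decimal integers rather than octal strings (a side effect of passing mode as an integer): 448 is 0700, 384 is 0600, 420 is 0644, and the 493 and 511 seen further on are 0755 and 0777. The conversion is easy to check:

    printf '%o\n' 448 384 420 493 511 288
    # -> 700, 600, 644, 755, 777, 440 (one per line)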
Dec  1 04:09:51 np0005540825 python3[4973]: ansible-ping Invoked with data=pong
Dec  1 04:09:52 np0005540825 python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:09:54 np0005540825 python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  1 04:09:55 np0005540825 python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:55 np0005540825 python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:56 np0005540825 python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:56 np0005540825 python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:56 np0005540825 python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:57 np0005540825 python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:58 np0005540825 python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:09:59 np0005540825 python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:10:00 np0005540825 python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580199.1548417-31-19090412849720/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:00 np0005540825 python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:00 np0005540825 python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:01 np0005540825 python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:01 np0005540825 python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:01 np0005540825 python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:01 np0005540825 python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:02 np0005540825 python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:02 np0005540825 python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:02 np0005540825 python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:02 np0005540825 python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:03 np0005540825 python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:03 np0005540825 python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:03 np0005540825 python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:04 np0005540825 python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:04 np0005540825 python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:04 np0005540825 python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:04 np0005540825 python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:05 np0005540825 python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:05 np0005540825 python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:05 np0005540825 python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:05 np0005540825 python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:06 np0005540825 python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:06 np0005540825 python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:06 np0005540825 python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:07 np0005540825 python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:10:07 np0005540825 python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
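[Note] The block of ansible-authorized_key calls above seeds ~zuul/.ssh/authorized_keys with the CI build key plus a list of developer keys; each call uses state=present with exclusive=False, so keys are appended, never replaced. One way to audit the result with stock OpenSSH tooling:

    # List fingerprint, size and comment for every installed key
    ssh-keygen -lf ~zuul/.ssh/authorized_keys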
Dec  1 04:10:11 np0005540825 python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  1 04:10:11 np0005540825 systemd[1]: Starting Time & Date Service...
Dec  1 04:10:11 np0005540825 systemd[1]: Started Time & Date Service.
Dec  1 04:10:11 np0005540825 systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
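[Note] community.general.timezone drives systemd-timedated, which is why the service starts on demand and logs the zone change itself. The equivalent manual steps, assuming timedatectl as shown:

    timedatectl set-timezone UTC
    timedatectl status    # verify "Time zone: UTC"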
Dec  1 04:10:11 np0005540825 python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:12 np0005540825 python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:10:12 np0005540825 python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764580211.8072374-251-216798295785116/source _original_basename=tmpmfxe0k2w follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:12 np0005540825 python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:10:13 np0005540825 python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764580212.7108316-301-186128726003531/source _original_basename=tmppewwpqcn follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:14 np0005540825 python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:10:14 np0005540825 python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764580213.8605952-381-263743495248861/source _original_basename=tmp6yclyo4x follow=False checksum=342f501e01c1098669fc1f1874ec75e7ad7dd27a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:15 np0005540825 python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:10:15 np0005540825 python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:10:15 np0005540825 python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:10:16 np0005540825 python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580215.4823039-451-27898966889022/source _original_basename=tmpzqu4qfzg follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:17 np0005540825 python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-bfee-2c1a-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:10:17 np0005540825 python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-bfee-2c1a-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
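[Note] The trailing #012 in _raw_params=env#012 is not part of the command: rsyslog escapes control characters as #<octal>, and octal 012 is a newline, so the playbook actually passed the shell command "env" followed by a newline. The same escaping appears in the lsblk and io.max commands later in this log. Easily verified:

    printf 'env\012' | od -c
    # 0000000   e   n   v  \n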
Dec  1 04:10:19 np0005540825 python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:39 np0005540825 python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:10:41 np0005540825 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  1 04:11:21 np0005540825 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  1 04:11:21 np0005540825 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  1 04:11:21 np0005540825 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  1 04:11:21 np0005540825 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  1 04:11:21 np0005540825 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  1 04:11:21 np0005540825 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  1 04:11:21 np0005540825 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  1 04:11:21 np0005540825 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  1 04:11:21 np0005540825 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  1 04:11:21 np0005540825 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  1 04:11:21 np0005540825 NetworkManager[858]: <info>  [1764580281.8662] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 04:11:21 np0005540825 systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 04:11:21 np0005540825 NetworkManager[858]: <info>  [1764580281.8850] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:11:21 np0005540825 NetworkManager[858]: <info>  [1764580281.8873] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  1 04:11:21 np0005540825 NetworkManager[858]: <info>  [1764580281.8877] device (eth1): carrier: link connected
Dec  1 04:11:21 np0005540825 NetworkManager[858]: <info>  [1764580281.8878] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  1 04:11:21 np0005540825 NetworkManager[858]: <info>  [1764580281.8883] policy: auto-activating connection 'Wired connection 1' (4c8aaa08-05a9-3821-9575-0ca27c8b2493)
Dec  1 04:11:21 np0005540825 NetworkManager[858]: <info>  [1764580281.8886] device (eth1): Activation: starting connection 'Wired connection 1' (4c8aaa08-05a9-3821-9575-0ca27c8b2493)
Dec  1 04:11:21 np0005540825 NetworkManager[858]: <info>  [1764580281.8887] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:11:21 np0005540825 NetworkManager[858]: <info>  [1764580281.8889] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:11:21 np0005540825 NetworkManager[858]: <info>  [1764580281.8893] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:11:21 np0005540825 NetworkManager[858]: <info>  [1764580281.8896] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 04:11:22 np0005540825 python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-4d84-78d9-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:11:32 np0005540825 python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:11:33 np0005540825 python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764580292.577472-104-152792238947807/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=4bfab336d30b04d31d9473d6e0ddb3e054ca0e11 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:11:34 np0005540825 python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
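[Note] The playbook installs the keyfile ci-private-network.nmconnection and then restarts NetworkManager outright, which is why the wait-online unit and both DHCP transactions are torn down below. A lighter-weight alternative that avoids the full restart, assuming standard nmcli:

    nmcli connection reload                 # re-read /etc/NetworkManager/system-connections
    nmcli connection up ci-private-network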
Dec  1 04:11:34 np0005540825 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  1 04:11:34 np0005540825 systemd[1]: Stopped Network Manager Wait Online.
Dec  1 04:11:34 np0005540825 systemd[1]: Stopping Network Manager Wait Online...
Dec  1 04:11:34 np0005540825 NetworkManager[858]: <info>  [1764580294.1763] caught SIGTERM, shutting down normally.
Dec  1 04:11:34 np0005540825 systemd[1]: Stopping Network Manager...
Dec  1 04:11:34 np0005540825 NetworkManager[858]: <info>  [1764580294.1770] dhcp4 (eth0): canceled DHCP transaction
Dec  1 04:11:34 np0005540825 NetworkManager[858]: <info>  [1764580294.1770] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 04:11:34 np0005540825 NetworkManager[858]: <info>  [1764580294.1770] dhcp4 (eth0): state changed no lease
Dec  1 04:11:34 np0005540825 NetworkManager[858]: <info>  [1764580294.1772] manager: NetworkManager state is now CONNECTING
Dec  1 04:11:34 np0005540825 NetworkManager[858]: <info>  [1764580294.1833] dhcp4 (eth1): canceled DHCP transaction
Dec  1 04:11:34 np0005540825 NetworkManager[858]: <info>  [1764580294.1834] dhcp4 (eth1): state changed no lease
Dec  1 04:11:34 np0005540825 NetworkManager[858]: <info>  [1764580294.1900] exiting (success)
Dec  1 04:11:34 np0005540825 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 04:11:34 np0005540825 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 04:11:34 np0005540825 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  1 04:11:34 np0005540825 systemd[1]: Stopped Network Manager.
Dec  1 04:11:34 np0005540825 systemd[1]: NetworkManager.service: Consumed 1.023s CPU time, 9.9M memory peak.
Dec  1 04:11:34 np0005540825 systemd[1]: Starting Network Manager...
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.2459] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:f3f81c00-6df2-4ea1-97f9-33d871af0070)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.2462] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.2511] manager[0x556b9c651070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 04:11:34 np0005540825 systemd[1]: Starting Hostname Service...
Dec  1 04:11:34 np0005540825 systemd[1]: Started Hostname Service.
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3403] hostname: hostname: using hostnamed
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3404] hostname: static hostname changed from (none) to "np0005540825.novalocal"
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3408] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3413] manager[0x556b9c651070]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3413] manager[0x556b9c651070]: rfkill: WWAN hardware radio set enabled
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3437] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3437] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3438] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3438] manager: Networking is enabled by state file
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3441] settings: Loaded settings plugin: keyfile (internal)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3444] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3469] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
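[Note] As the warning above says, the ifcfg-rh plugin is deprecated; the fix the log itself suggests converts any remaining ifcfg profiles to the keyfile format:

    nmcli connection migrate    # rewrite ifcfg-rh profiles as keyfiles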
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3478] dhcp: init: Using DHCP client 'internal'
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3480] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3486] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3491] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3497] device (lo): Activation: starting connection 'lo' (9cf04f40-f2df-4143-8f8e-28f6ca572455)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3502] device (eth0): carrier: link connected
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3505] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3509] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3509] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3515] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3520] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3525] device (eth1): carrier: link connected
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3528] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3532] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (4c8aaa08-05a9-3821-9575-0ca27c8b2493) (indicated)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3532] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3537] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3542] device (eth1): Activation: starting connection 'Wired connection 1' (4c8aaa08-05a9-3821-9575-0ca27c8b2493)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3548] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 04:11:34 np0005540825 systemd[1]: Started Network Manager.
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3551] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3553] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3555] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3559] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3562] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3564] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3566] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3568] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3573] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3575] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3582] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3585] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3600] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3601] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3613] device (lo): Activation: successful, device activated.
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3618] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3622] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3690] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 systemd[1]: Starting Network Manager Wait Online...
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3713] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3715] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3718] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3721] device (eth0): Activation: successful, device activated.
Dec  1 04:11:34 np0005540825 NetworkManager[7187]: <info>  [1764580294.3725] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 04:11:34 np0005540825 python3[7259]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-4d84-78d9-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:11:44 np0005540825 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 04:12:04 np0005540825 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3112] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 04:12:19 np0005540825 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 04:12:19 np0005540825 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3456] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3459] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3468] device (eth1): Activation: successful, device activated.
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3473] manager: startup complete
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3477] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <warn>  [1764580339.3481] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3500] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  1 04:12:19 np0005540825 systemd[1]: Finished Network Manager Wait Online.
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3553] dhcp4 (eth1): canceled DHCP transaction
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3553] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3554] dhcp4 (eth1): state changed no lease
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3566] policy: auto-activating connection 'ci-private-network' (cf876690-0410-53d6-9ecb-0fe69a303d1c)
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3569] device (eth1): Activation: starting connection 'ci-private-network' (cf876690-0410-53d6-9ecb-0fe69a303d1c)
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3570] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3572] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3577] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3584] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3635] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3636] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:12:19 np0005540825 NetworkManager[7187]: <info>  [1764580339.3642] device (eth1): Activation: successful, device activated.
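[Note] The eth1 sequence above reads as one unit: the auto-created "Wired connection 1" profile spends exactly its 45-second DHCP window (04:11:34 to 04:12:19) without a lease, fails with ip-config-unavailable, and NetworkManager then auto-activates the freshly installed static ci-private-network profile, which comes up immediately. To see which profile ended up owning each device:

    nmcli -f NAME,UUID,DEVICE connection show --active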
Dec  1 04:12:29 np0005540825 systemd[4303]: Starting Mark boot as successful...
Dec  1 04:12:29 np0005540825 systemd[4303]: Finished Mark boot as successful.
Dec  1 04:12:29 np0005540825 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 04:12:34 np0005540825 systemd-logind[789]: Session 1 logged out. Waiting for processes to exit.
Dec  1 04:13:33 np0005540825 systemd-logind[789]: New session 3 of user zuul.
Dec  1 04:13:33 np0005540825 systemd[1]: Started Session 3 of User zuul.
Dec  1 04:13:33 np0005540825 python3[7369]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:13:33 np0005540825 python3[7442]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580413.20346-373-216009886340058/source _original_basename=tmpr6chpvnl follow=False checksum=978dba8c6f7bc0ac5b14f81009c6504f60a75fb7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:13:38 np0005540825 systemd[1]: session-3.scope: Deactivated successfully.
Dec  1 04:13:38 np0005540825 systemd-logind[789]: Session 3 logged out. Waiting for processes to exit.
Dec  1 04:13:38 np0005540825 systemd-logind[789]: Removed session 3.
Dec  1 04:15:29 np0005540825 systemd[4303]: Created slice User Background Tasks Slice.
Dec  1 04:15:29 np0005540825 systemd[4303]: Starting Cleanup of User's Temporary Files and Directories...
Dec  1 04:15:29 np0005540825 systemd[4303]: Finished Cleanup of User's Temporary Files and Directories.
Dec  1 04:19:32 np0005540825 systemd-logind[789]: New session 4 of user zuul.
Dec  1 04:19:32 np0005540825 systemd[1]: Started Session 4 of User zuul.
Dec  1 04:19:32 np0005540825 python3[7503]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-6b80-459f-000000001cdc-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:19:33 np0005540825 python3[7532]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:19:33 np0005540825 python3[7558]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:19:34 np0005540825 python3[7584]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:19:34 np0005540825 python3[7610]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:19:35 np0005540825 python3[7636]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:19:35 np0005540825 python3[7714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:19:36 np0005540825 python3[7787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580775.7211604-516-66821476041722/source _original_basename=tmpnf_nkf0e follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:19:37 np0005540825 python3[7837]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:19:38 np0005540825 systemd[1]: Reloading.
Dec  1 04:19:38 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:19:39 np0005540825 python3[7892]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  1 04:19:40 np0005540825 python3[7918]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:19:40 np0005540825 python3[7946]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:19:40 np0005540825 python3[7974]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:19:41 np0005540825 python3[8002]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:19:41 np0005540825 python3[8029]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-6b80-459f-000000001ce3-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
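The four writes above use the cgroup v2 io.max interface: each line is "MAJ:MIN key=value ...", where 252:0 is most likely the virtio root disk vda, riops/wiops cap read/write IOPS, and rbps/wbps cap bytes per second (here 250 MiB/s). A minimal shell sketch of the same throttle, with the values taken from the log:

    # throttle device 252:0 for everything in system.slice
    echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
        > /sys/fs/cgroup/system.slice/io.max
    cat /sys/fs/cgroup/system.slice/io.max          # read back the effective limits
    echo "252:0 riops=max wiops=max rbps=max wbps=max" \
        > /sys/fs/cgroup/system.slice/io.max        # "max" removes a limit again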
Dec  1 04:19:42 np0005540825 python3[8059]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 04:19:44 np0005540825 systemd[1]: session-4.scope: Deactivated successfully.
Dec  1 04:19:44 np0005540825 systemd[1]: session-4.scope: Consumed 4.121s CPU time.
Dec  1 04:19:44 np0005540825 systemd-logind[789]: Session 4 logged out. Waiting for processes to exit.
Dec  1 04:19:44 np0005540825 systemd-logind[789]: Removed session 4.
Dec  1 04:19:46 np0005540825 systemd-logind[789]: New session 5 of user zuul.
Dec  1 04:19:46 np0005540825 systemd[1]: Started Session 5 of User zuul.
Dec  1 04:19:46 np0005540825 python3[8094]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  1 04:20:03 np0005540825 kernel: SELinux:  Converting 385 SID table entries...
Dec  1 04:20:03 np0005540825 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 04:20:03 np0005540825 kernel: SELinux:  policy capability open_perms=1
Dec  1 04:20:03 np0005540825 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 04:20:03 np0005540825 kernel: SELinux:  policy capability always_check_network=0
Dec  1 04:20:03 np0005540825 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 04:20:03 np0005540825 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 04:20:03 np0005540825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 04:20:13 np0005540825 kernel: SELinux:  Converting 385 SID table entries...
Dec  1 04:20:13 np0005540825 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 04:20:13 np0005540825 kernel: SELinux:  policy capability open_perms=1
Dec  1 04:20:13 np0005540825 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 04:20:13 np0005540825 kernel: SELinux:  policy capability always_check_network=0
Dec  1 04:20:13 np0005540825 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 04:20:13 np0005540825 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 04:20:13 np0005540825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 04:20:24 np0005540825 kernel: SELinux:  Converting 385 SID table entries...
Dec  1 04:20:24 np0005540825 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 04:20:24 np0005540825 kernel: SELinux:  policy capability open_perms=1
Dec  1 04:20:24 np0005540825 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 04:20:24 np0005540825 kernel: SELinux:  policy capability always_check_network=0
Dec  1 04:20:24 np0005540825 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 04:20:24 np0005540825 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 04:20:24 np0005540825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 04:20:25 np0005540825 setsebool[8161]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  1 04:20:25 np0005540825 setsebool[8161]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
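The two boolean changes correspond to setsebool invocations like the following; whether -P (persist across reboots) was used is not visible in the log:

    setsebool -P virt_use_nfs 1
    setsebool -P virt_sandbox_use_all_caps 1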
Dec  1 04:20:37 np0005540825 kernel: SELinux:  Converting 388 SID table entries...
Dec  1 04:20:37 np0005540825 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 04:20:37 np0005540825 kernel: SELinux:  policy capability open_perms=1
Dec  1 04:20:37 np0005540825 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 04:20:37 np0005540825 kernel: SELinux:  policy capability always_check_network=0
Dec  1 04:20:37 np0005540825 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 04:20:37 np0005540825 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 04:20:37 np0005540825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 04:20:54 np0005540825 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  1 04:20:55 np0005540825 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 04:20:55 np0005540825 systemd[1]: Starting man-db-cache-update.service...
Dec  1 04:20:55 np0005540825 systemd[1]: Reloading.
Dec  1 04:20:55 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:20:55 np0005540825 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 04:21:00 np0005540825 python3[13179]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-8f55-108f-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:21:01 np0005540825 kernel: evm: overlay not supported
Dec  1 04:21:01 np0005540825 systemd[4303]: Starting D-Bus User Message Bus...
Dec  1 04:21:01 np0005540825 dbus-broker-launch[13896]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  1 04:21:01 np0005540825 dbus-broker-launch[13896]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  1 04:21:01 np0005540825 systemd[4303]: Started D-Bus User Message Bus.
Dec  1 04:21:01 np0005540825 dbus-broker-launch[13896]: Ready

Dec  1 04:21:01 np0005540825 systemd[4303]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  1 04:21:01 np0005540825 systemd[4303]: Created slice Slice /user.
Dec  1 04:21:01 np0005540825 systemd[4303]: podman-13804.scope: unit configures an IP firewall, but not running as root.
Dec  1 04:21:01 np0005540825 systemd[4303]: (This warning is only shown for the first unit using IP firewalling.)
Dec  1 04:21:01 np0005540825 systemd[4303]: Started podman-13804.scope.
Dec  1 04:21:01 np0005540825 systemd[4303]: Started podman-pause-3cbe6847.scope.
Dec  1 04:21:02 np0005540825 python3[14011]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.51:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.51:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:21:02 np0005540825 python3[14011]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
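Decoded (#012 is a syslog-escaped newline), the blockinfile task appends a containers-registries.conf(5) v2 entry marking the CI registry as insecure, wrapped in the default ANSIBLE MANAGED markers:

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.51:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK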
Dec  1 04:21:03 np0005540825 systemd[1]: session-5.scope: Deactivated successfully.
Dec  1 04:21:03 np0005540825 systemd[1]: session-5.scope: Consumed 1min 6.774s CPU time.
Dec  1 04:21:03 np0005540825 systemd-logind[789]: Session 5 logged out. Waiting for processes to exit.
Dec  1 04:21:03 np0005540825 systemd-logind[789]: Removed session 5.
Dec  1 04:21:27 np0005540825 systemd-logind[789]: New session 6 of user zuul.
Dec  1 04:21:27 np0005540825 systemd[1]: Started Session 6 of User zuul.
Dec  1 04:21:28 np0005540825 python3[23266]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGtq5pibPyVxGWB2xMqk4uL1zofeXFQ8syXRsXPs/DtqKO/PJ2juhFzgoD/wjEUo54K4dvZgfGufGjQyIWW2pRg= zuul@np0005540824.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:21:28 np0005540825 python3[23425]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGtq5pibPyVxGWB2xMqk4uL1zofeXFQ8syXRsXPs/DtqKO/PJ2juhFzgoD/wjEUo54K4dvZgfGufGjQyIWW2pRg= zuul@np0005540824.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:21:29 np0005540825 python3[23790]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005540825.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  1 04:21:30 np0005540825 python3[24063]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGtq5pibPyVxGWB2xMqk4uL1zofeXFQ8syXRsXPs/DtqKO/PJ2juhFzgoD/wjEUo54K4dvZgfGufGjQyIWW2pRg= zuul@np0005540824.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 04:21:30 np0005540825 python3[24301]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:21:31 np0005540825 python3[24520]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764580890.5682015-167-129682070368702/source _original_basename=tmpb5_hi8mo follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:21:33 np0005540825 python3[25429]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  1 04:21:33 np0005540825 systemd[1]: Starting Hostname Service...
Dec  1 04:21:33 np0005540825 systemd[1]: Started Hostname Service.
Dec  1 04:21:33 np0005540825 systemd-hostnamed[25534]: Changed pretty hostname to 'compute-0'
Dec  1 04:21:33 np0005540825 systemd-hostnamed[25534]: Hostname set to <compute-0> (static)
Dec  1 04:21:33 np0005540825 NetworkManager[7187]: <info>  [1764580893.8986] hostname: static hostname changed from "np0005540825.novalocal" to "compute-0"
Dec  1 04:21:33 np0005540825 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 04:21:33 np0005540825 systemd[1]: Started Network Manager Script Dispatcher Service.
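The hostname task with use=systemd drives systemd-hostnamed over D-Bus, which is why NetworkManager sees the change immediately; it is roughly equivalent to:

    hostnamectl set-hostname compute-0
    hostnamectl        # verify the static and pretty hostname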
Dec  1 04:21:34 np0005540825 systemd[1]: session-6.scope: Deactivated successfully.
Dec  1 04:21:34 np0005540825 systemd[1]: session-6.scope: Consumed 2.468s CPU time.
Dec  1 04:21:34 np0005540825 systemd-logind[789]: Session 6 logged out. Waiting for processes to exit.
Dec  1 04:21:34 np0005540825 systemd-logind[789]: Removed session 6.
Dec  1 04:21:43 np0005540825 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 04:21:49 np0005540825 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 04:21:49 np0005540825 systemd[1]: Finished man-db-cache-update.service.
Dec  1 04:21:49 np0005540825 systemd[1]: man-db-cache-update.service: Consumed 1min 4.969s CPU time.
Dec  1 04:21:49 np0005540825 systemd[1]: run-rc4bbbf2e82864a9bb2a55cef883db581.service: Deactivated successfully.
Dec  1 04:22:03 np0005540825 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 04:24:29 np0005540825 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  1 04:24:29 np0005540825 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  1 04:24:29 np0005540825 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  1 04:24:29 np0005540825 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  1 04:25:51 np0005540825 systemd-logind[789]: New session 7 of user zuul.
Dec  1 04:25:51 np0005540825 systemd[1]: Started Session 7 of User zuul.
Dec  1 04:25:51 np0005540825 python3[30053]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:25:53 np0005540825 python3[30169]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:25:54 np0005540825 python3[30242]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764581153.5290158-34000-265769694436052/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:25:54 np0005540825 python3[30268]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:25:54 np0005540825 python3[30341]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764581153.5290158-34000-265769694436052/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:25:55 np0005540825 python3[30367]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:25:55 np0005540825 python3[30440]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764581153.5290158-34000-265769694436052/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:25:55 np0005540825 python3[30466]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:25:56 np0005540825 python3[30539]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764581153.5290158-34000-265769694436052/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:25:56 np0005540825 python3[30565]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:25:56 np0005540825 python3[30638]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764581153.5290158-34000-265769694436052/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:25:56 np0005540825 python3[30664]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:25:57 np0005540825 python3[30737]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764581153.5290158-34000-265769694436052/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:25:57 np0005540825 python3[30763]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:25:57 np0005540825 python3[30836]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764581153.5290158-34000-265769694436052/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:26:10 np0005540825 python3[30894]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:31:10 np0005540825 systemd[1]: session-7.scope: Deactivated successfully.
Dec  1 04:31:10 np0005540825 systemd[1]: session-7.scope: Consumed 4.741s CPU time.
Dec  1 04:31:10 np0005540825 systemd-logind[789]: Session 7 logged out. Waiting for processes to exit.
Dec  1 04:31:10 np0005540825 systemd-logind[789]: Removed session 7.
Dec  1 04:38:03 np0005540825 systemd-logind[789]: New session 8 of user zuul.
Dec  1 04:38:03 np0005540825 systemd[1]: Started Session 8 of User zuul.
Dec  1 04:38:05 np0005540825 python3.9[31072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:38:06 np0005540825 python3.9[31253]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
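Decoded (the #012 sequences are syslog-escaped newlines), the task's shell payload fetches the repo-setup tool, installs it into a throwaway venv, and enables the current-podified antelope repositories:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main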
Dec  1 04:38:14 np0005540825 systemd-logind[789]: Session 8 logged out. Waiting for processes to exit.
Dec  1 04:38:14 np0005540825 systemd[1]: session-8.scope: Deactivated successfully.
Dec  1 04:38:14 np0005540825 systemd[1]: session-8.scope: Consumed 8.396s CPU time.
Dec  1 04:38:14 np0005540825 systemd-logind[789]: Removed session 8.
Dec  1 04:38:29 np0005540825 systemd-logind[789]: New session 9 of user zuul.
Dec  1 04:38:29 np0005540825 systemd[1]: Started Session 9 of User zuul.
Dec  1 04:38:30 np0005540825 python3.9[31465]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  1 04:38:31 np0005540825 python3.9[31639]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:38:32 np0005540825 python3.9[31791]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:38:33 np0005540825 python3.9[31944]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:38:34 np0005540825 python3.9[32096]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:38:35 np0005540825 python3.9[32248]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:38:36 np0005540825 python3.9[32371]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764581915.0101836-177-230246208194569/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:38:37 np0005540825 python3.9[32523]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:38:38 np0005540825 python3.9[32679]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:38:38 np0005540825 python3.9[32831]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:38:39 np0005540825 python3.9[32981]: ansible-ansible.builtin.service_facts Invoked
Dec  1 04:38:45 np0005540825 python3.9[33234]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:38:45 np0005540825 python3.9[33384]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:38:47 np0005540825 python3.9[33538]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:38:48 np0005540825 python3.9[33698]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:38:49 np0005540825 python3.9[33782]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:39:29 np0005540825 systemd[1]: Reloading.
Dec  1 04:39:29 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:39:29 np0005540825 systemd[1]: Starting dnf makecache...
Dec  1 04:39:29 np0005540825 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  1 04:39:30 np0005540825 dnf[33989]: Failed determining last makecache time.
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-openstack-barbican-42b4c41831408a8e323 166 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 204 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 systemd[1]: Reloading.
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-openstack-cinder-1c00d6490d88e436f26ef 196 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-python-stevedore-c4acc5639fd2329372142 202 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-python-cloudkitty-tests-tempest-2c80f8 183 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 186 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 145 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-python-designate-tests-tempest-347fdbc 189 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-openstack-glance-1fd12c29b339f30fe823e 202 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 197 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-openstack-manila-3c01b7181572c95dac462 204 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-python-whitebox-neutron-tests-tempest- 198 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-openstack-octavia-ba397f07a7331190208c 202 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-openstack-watcher-c014f81a8647287f6dcc 184 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-ansible-config_template-5ccaa22121a7ff 198 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 168 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-openstack-swift-dc98a8463506ac520c469a 189 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-python-tempestconf-8515371b7cceebd4282 193 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 dnf[33989]: delorean-openstack-heat-ui-013accbfd179753bc3f0 199 kB/s | 3.0 kB     00:00
Dec  1 04:39:30 np0005540825 systemd[1]: Reloading.
Dec  1 04:39:30 np0005540825 dnf[33989]: CentOS Stream 9 - BaseOS                         83 kB/s | 7.3 kB     00:00
Dec  1 04:39:30 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:39:30 np0005540825 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  1 04:39:30 np0005540825 dnf[33989]: CentOS Stream 9 - AppStream                      28 kB/s | 7.4 kB     00:00
Dec  1 04:39:30 np0005540825 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Dec  1 04:39:30 np0005540825 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Dec  1 04:39:30 np0005540825 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Dec  1 04:39:31 np0005540825 dnf[33989]: CentOS Stream 9 - CRB                            30 kB/s | 7.2 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: CentOS Stream 9 - Extras packages                85 kB/s | 8.3 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: dlrn-antelope-testing                           154 kB/s | 3.0 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: dlrn-antelope-build-deps                        143 kB/s | 3.0 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: centos9-rabbitmq                                115 kB/s | 3.0 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: centos9-storage                                 116 kB/s | 3.0 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: centos9-opstools                                122 kB/s | 3.0 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: NFV SIG OpenvSwitch                             124 kB/s | 3.0 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: repo-setup-centos-appstream                     164 kB/s | 4.4 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: repo-setup-centos-baseos                        181 kB/s | 3.9 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: repo-setup-centos-highavailability              182 kB/s | 3.9 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: repo-setup-centos-powertools                    173 kB/s | 4.3 kB     00:00
Dec  1 04:39:31 np0005540825 dnf[33989]: Extra Packages for Enterprise Linux 9 - x86_64  205 kB/s |  30 kB     00:00
Dec  1 04:39:32 np0005540825 dnf[33989]: Metadata cache created.
Dec  1 04:39:32 np0005540825 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  1 04:39:32 np0005540825 systemd[1]: Finished dnf makecache.
Dec  1 04:39:32 np0005540825 systemd[1]: dnf-makecache.service: Consumed 1.822s CPU time.
Dec  1 04:40:35 np0005540825 kernel: SELinux:  Converting 2718 SID table entries...
Dec  1 04:40:35 np0005540825 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 04:40:35 np0005540825 kernel: SELinux:  policy capability open_perms=1
Dec  1 04:40:35 np0005540825 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 04:40:35 np0005540825 kernel: SELinux:  policy capability always_check_network=0
Dec  1 04:40:35 np0005540825 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 04:40:35 np0005540825 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 04:40:35 np0005540825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 04:40:35 np0005540825 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  1 04:40:35 np0005540825 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 04:40:35 np0005540825 systemd[1]: Starting man-db-cache-update.service...
Dec  1 04:40:35 np0005540825 systemd[1]: Reloading.
Dec  1 04:40:35 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:40:36 np0005540825 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 04:40:37 np0005540825 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 04:40:37 np0005540825 systemd[1]: Finished man-db-cache-update.service.
Dec  1 04:40:37 np0005540825 systemd[1]: man-db-cache-update.service: Consumed 1.591s CPU time.
Dec  1 04:40:37 np0005540825 systemd[1]: run-rc6eeda39137f451caf7933d7363d3c87.service: Deactivated successfully.
Dec  1 04:40:38 np0005540825 python3.9[35347]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:40:41 np0005540825 python3.9[35628]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  1 04:40:42 np0005540825 python3.9[35780]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  1 04:40:44 np0005540825 python3.9[35933]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:40:45 np0005540825 python3.9[36086]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  1 04:40:47 np0005540825 python3.9[36238]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:40:48 np0005540825 python3.9[36390]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:40:51 np0005540825 python3.9[36513]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582047.5675075-666-69829801641092/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c8748787c49c5bdccd5df153e138fac81f5459e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:40:52 np0005540825 python3.9[36667]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:40:54 np0005540825 python3.9[36819]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:40:56 np0005540825 python3.9[36972]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:40:57 np0005540825 python3.9[37124]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  1 04:40:57 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 04:40:58 np0005540825 python3.9[37278]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 04:40:59 np0005540825 python3.9[37436]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 04:41:00 np0005540825 python3.9[37596]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  1 04:41:01 np0005540825 python3.9[37749]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 04:41:02 np0005540825 python3.9[37907]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  1 04:41:03 np0005540825 python3.9[38059]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:41:06 np0005540825 python3.9[38212]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:41:07 np0005540825 python3.9[38364]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:41:07 np0005540825 python3.9[38487]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764582066.5151756-1023-41982950276343/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:41:08 np0005540825 python3.9[38639]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:41:09 np0005540825 systemd[1]: Starting Load Kernel Modules...
Dec  1 04:41:09 np0005540825 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  1 04:41:09 np0005540825 kernel: Bridge firewalling registered
Dec  1 04:41:09 np0005540825 systemd-modules-load[38643]: Inserted module 'br_netfilter'
Dec  1 04:41:09 np0005540825 systemd[1]: Finished Load Kernel Modules.
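Restarting systemd-modules-load.service makes it re-read /etc/modules-load.d/, and the kernel messages show br_netfilter was among the modules listed in the new 99-edpm.conf (whose full contents are not logged). A minimal sketch of the mechanism, with a hypothetical one-module file:

    echo br_netfilter > /etc/modules-load.d/99-edpm.conf   # the real file may list more modules
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter                              # confirm the module is inserted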
Dec  1 04:41:09 np0005540825 python3.9[38798]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:41:10 np0005540825 python3.9[38921]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764582069.299241-1092-209896688293765/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:41:11 np0005540825 python3.9[39073]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:41:14 np0005540825 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Dec  1 04:41:14 np0005540825 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Dec  1 04:41:15 np0005540825 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 04:41:15 np0005540825 systemd[1]: Starting man-db-cache-update.service...
Dec  1 04:41:15 np0005540825 systemd[1]: Reloading.
Dec  1 04:41:15 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:41:15 np0005540825 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 04:41:17 np0005540825 python3.9[40799]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:41:18 np0005540825 python3.9[42352]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  1 04:41:19 np0005540825 python3.9[42998]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:41:19 np0005540825 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 04:41:19 np0005540825 systemd[1]: Finished man-db-cache-update.service.
Dec  1 04:41:19 np0005540825 systemd[1]: man-db-cache-update.service: Consumed 5.312s CPU time.
Dec  1 04:41:19 np0005540825 systemd[1]: run-rc927ff962a604c22b7dc54faa817273b.service: Deactivated successfully.
Dec  1 04:41:20 np0005540825 python3.9[43235]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:41:20 np0005540825 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  1 04:41:20 np0005540825 systemd[1]: Starting Authorization Manager...
Dec  1 04:41:21 np0005540825 polkitd[43452]: Started polkitd version 0.117
Dec  1 04:41:21 np0005540825 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  1 04:41:21 np0005540825 systemd[1]: Started Authorization Manager.
Dec  1 04:41:22 np0005540825 python3.9[43622]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:41:22 np0005540825 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  1 04:41:22 np0005540825 systemd[1]: tuned.service: Deactivated successfully.
Dec  1 04:41:22 np0005540825 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  1 04:41:22 np0005540825 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  1 04:41:22 np0005540825 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  1 04:41:23 np0005540825 python3.9[43784]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  1 04:41:27 np0005540825 python3.9[43936]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:41:27 np0005540825 systemd[1]: Reloading.
Dec  1 04:41:27 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:41:28 np0005540825 python3.9[44124]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:41:28 np0005540825 systemd[1]: Reloading.
Dec  1 04:41:28 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:41:29 np0005540825 python3.9[44313]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:41:30 np0005540825 python3.9[44466]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:41:30 np0005540825 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
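Taken together, the dd, file-mode, mount, mkswap, and swapon tasks above implement the standard swap-file recipe; the fstab line follows from the mount module's parameters (src=/swap, name=none, fstype=swap, opts=sw):

    dd if=/dev/zero of=/swap count=1024 bs=1M      # 1 GiB backing file
    chmod 600 /swap
    mkswap /swap
    swapon /swap
    echo "/swap none swap sw 0 0" >> /etc/fstab    # persist across reboots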
Dec  1 04:41:30 np0005540825 python3.9[44619]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
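update-ca-trust regenerates the consolidated trust stores from everything under /etc/pki/ca-trust/source/anchors/, which is how the tls-ca-bundle.pem copied at 04:40:51 takes effect. The general pattern (my-ca.pem is a placeholder name):

    cp my-ca.pem /etc/pki/ca-trust/source/anchors/
    update-ca-trust
    trust list | grep -i "my ca"    # optional check; the label depends on the cert subject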
Dec  1 04:41:33 np0005540825 python3.9[44781]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
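Writing 2 to /sys/kernel/mm/ksm/run stops the KSM daemon and un-shares all previously merged pages, complementing the ksm/ksmtuned units disabled above:

    echo 2 > /sys/kernel/mm/ksm/run        # 0=stop, 1=run, 2=stop and unmerge everything
    cat /sys/kernel/mm/ksm/pages_shared    # should read 0 afterwards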
Dec  1 04:41:33 np0005540825 python3.9[44934]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:41:33 np0005540825 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  1 04:41:33 np0005540825 systemd[1]: Stopped Apply Kernel Variables.
Dec  1 04:41:33 np0005540825 systemd[1]: Stopping Apply Kernel Variables...
Dec  1 04:41:34 np0005540825 systemd[1]: Starting Apply Kernel Variables...
Dec  1 04:41:34 np0005540825 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  1 04:41:34 np0005540825 systemd[1]: Finished Apply Kernel Variables.
Dec  1 04:41:34 np0005540825 systemd[1]: session-9.scope: Deactivated successfully.
Dec  1 04:41:34 np0005540825 systemd[1]: session-9.scope: Consumed 2min 20.335s CPU time.
Dec  1 04:41:34 np0005540825 systemd-logind[789]: Session 9 logged out. Waiting for processes to exit.
Dec  1 04:41:34 np0005540825 systemd-logind[789]: Removed session 9.
Dec  1 04:41:40 np0005540825 systemd-logind[789]: New session 10 of user zuul.
Dec  1 04:41:40 np0005540825 systemd[1]: Started Session 10 of User zuul.
Dec  1 04:41:41 np0005540825 python3.9[45117]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:41:42 np0005540825 python3.9[45273]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  1 04:41:43 np0005540825 python3.9[45426]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 04:41:44 np0005540825 python3.9[45584]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 04:41:45 np0005540825 python3.9[45744]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:41:46 np0005540825 python3.9[45828]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 04:41:50 np0005540825 python3.9[45993]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:42:01 np0005540825 kernel: SELinux:  Converting 2730 SID table entries...
Dec  1 04:42:01 np0005540825 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 04:42:01 np0005540825 kernel: SELinux:  policy capability open_perms=1
Dec  1 04:42:01 np0005540825 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 04:42:01 np0005540825 kernel: SELinux:  policy capability always_check_network=0
Dec  1 04:42:01 np0005540825 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 04:42:01 np0005540825 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 04:42:01 np0005540825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 04:42:01 np0005540825 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  1 04:42:01 np0005540825 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  1 04:42:03 np0005540825 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 04:42:03 np0005540825 systemd[1]: Starting man-db-cache-update.service...
Dec  1 04:42:03 np0005540825 systemd[1]: Reloading.
Dec  1 04:42:03 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:42:03 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:42:03 np0005540825 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 04:42:04 np0005540825 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 04:42:04 np0005540825 systemd[1]: Finished man-db-cache-update.service.
Dec  1 04:42:04 np0005540825 systemd[1]: man-db-cache-update.service: Consumed 1.103s CPU time.
Dec  1 04:42:04 np0005540825 systemd[1]: run-ra484b14951d747e2a32ffce8c8d077d7.service: Deactivated successfully.
Dec  1 04:42:08 np0005540825 python3.9[47092]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 04:42:08 np0005540825 systemd[1]: Reloading.
Dec  1 04:42:08 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:42:08 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:42:08 np0005540825 systemd[1]: Starting Open vSwitch Database Unit...
Dec  1 04:42:08 np0005540825 chown[47134]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  1 04:42:08 np0005540825 ovs-ctl[47139]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  1 04:42:08 np0005540825 ovs-ctl[47139]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  1 04:42:09 np0005540825 ovs-ctl[47139]: Starting ovsdb-server [  OK  ]
Dec  1 04:42:09 np0005540825 ovs-vsctl[47188]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  1 04:42:09 np0005540825 ovs-vsctl[47204]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"4d9738cf-2abf-48e2-9303-677669784912\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  1 04:42:09 np0005540825 ovs-ctl[47139]: Configuring Open vSwitch system IDs [  OK  ]
Dec  1 04:42:09 np0005540825 ovs-vsctl[47210]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  1 04:42:09 np0005540825 ovs-ctl[47139]: Enabling remote OVSDB managers [  OK  ]
Dec  1 04:42:09 np0005540825 systemd[1]: Started Open vSwitch Database Unit.
Dec  1 04:42:09 np0005540825 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  1 04:42:09 np0005540825 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  1 04:42:09 np0005540825 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  1 04:42:09 np0005540825 kernel: openvswitch: Open vSwitch switching datapath
Dec  1 04:42:09 np0005540825 ovs-ctl[47258]: Inserting openvswitch module [  OK  ]
Dec  1 04:42:09 np0005540825 ovs-ctl[47227]: Starting ovs-vswitchd [  OK  ]
Dec  1 04:42:09 np0005540825 ovs-ctl[47227]: Enabling remote OVSDB managers [  OK  ]
Dec  1 04:42:09 np0005540825 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  1 04:42:09 np0005540825 ovs-vsctl[47276]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  1 04:42:09 np0005540825 systemd[1]: Starting Open vSwitch...
Dec  1 04:42:09 np0005540825 systemd[1]: Finished Open vSwitch.
Dec  1 04:42:10 np0005540825 python3.9[47427]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:42:11 np0005540825 python3.9[47579]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  1 04:42:12 np0005540825 kernel: SELinux:  Converting 2744 SID table entries...
Dec  1 04:42:12 np0005540825 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 04:42:12 np0005540825 kernel: SELinux:  policy capability open_perms=1
Dec  1 04:42:12 np0005540825 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 04:42:12 np0005540825 kernel: SELinux:  policy capability always_check_network=0
Dec  1 04:42:12 np0005540825 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 04:42:12 np0005540825 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 04:42:12 np0005540825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 04:42:13 np0005540825 python3.9[47734]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:42:14 np0005540825 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  1 04:42:14 np0005540825 python3.9[47892]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:42:17 np0005540825 python3.9[48045]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:42:18 np0005540825 python3.9[48332]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 04:42:19 np0005540825 python3.9[48482]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:42:20 np0005540825 python3.9[48636]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:42:22 np0005540825 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 04:42:22 np0005540825 systemd[1]: Starting man-db-cache-update.service...
Dec  1 04:42:22 np0005540825 systemd[1]: Reloading.
Dec  1 04:42:22 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:42:22 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:42:22 np0005540825 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 04:42:22 np0005540825 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 04:42:22 np0005540825 systemd[1]: Finished man-db-cache-update.service.
Dec  1 04:42:22 np0005540825 systemd[1]: run-r9df96de783804d14a3bb0baaa90a8130.service: Deactivated successfully.
Dec  1 04:42:24 np0005540825 python3.9[48953]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:42:24 np0005540825 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  1 04:42:24 np0005540825 systemd[1]: Stopped Network Manager Wait Online.
Dec  1 04:42:24 np0005540825 systemd[1]: Stopping Network Manager Wait Online...
Dec  1 04:42:24 np0005540825 systemd[1]: Stopping Network Manager...
Dec  1 04:42:24 np0005540825 NetworkManager[7187]: <info>  [1764582144.2758] caught SIGTERM, shutting down normally.
Dec  1 04:42:24 np0005540825 NetworkManager[7187]: <info>  [1764582144.2778] dhcp4 (eth0): canceled DHCP transaction
Dec  1 04:42:24 np0005540825 NetworkManager[7187]: <info>  [1764582144.2778] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 04:42:24 np0005540825 NetworkManager[7187]: <info>  [1764582144.2778] dhcp4 (eth0): state changed no lease
Dec  1 04:42:24 np0005540825 NetworkManager[7187]: <info>  [1764582144.2780] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 04:42:24 np0005540825 NetworkManager[7187]: <info>  [1764582144.2833] exiting (success)
Dec  1 04:42:24 np0005540825 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 04:42:24 np0005540825 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  1 04:42:24 np0005540825 systemd[1]: Stopped Network Manager.
Dec  1 04:42:24 np0005540825 systemd[1]: NetworkManager.service: Consumed 12.112s CPU time, 4.3M memory peak, read 0B from disk, written 23.0K to disk.
Dec  1 04:42:24 np0005540825 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 04:42:24 np0005540825 systemd[1]: Starting Network Manager...
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.3458] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:f3f81c00-6df2-4ea1-97f9-33d871af0070)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.3459] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.3511] manager[0x559906ab8090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 04:42:24 np0005540825 systemd[1]: Starting Hostname Service...
Dec  1 04:42:24 np0005540825 systemd[1]: Started Hostname Service.
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4257] hostname: hostname: using hostnamed
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4257] hostname: static hostname changed from (none) to "compute-0"
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4263] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4269] manager[0x559906ab8090]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4270] manager[0x559906ab8090]: rfkill: WWAN hardware radio set enabled
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4294] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4305] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4306] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4306] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4307] manager: Networking is enabled by state file
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4309] settings: Loaded settings plugin: keyfile (internal)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4313] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4342] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4354] dhcp: init: Using DHCP client 'internal'
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4359] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4364] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4369] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4378] device (lo): Activation: starting connection 'lo' (9cf04f40-f2df-4143-8f8e-28f6ca572455)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4385] device (eth0): carrier: link connected
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4391] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4396] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4398] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4404] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4410] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4417] device (eth1): carrier: link connected
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4421] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4425] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (cf876690-0410-53d6-9ecb-0fe69a303d1c) (indicated)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4426] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4430] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4437] device (eth1): Activation: starting connection 'ci-private-network' (cf876690-0410-53d6-9ecb-0fe69a303d1c)
Dec  1 04:42:24 np0005540825 systemd[1]: Started Network Manager.
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4443] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4450] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4453] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4455] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4457] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4460] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4462] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4466] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4470] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4477] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4480] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4490] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4505] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4515] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4518] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4524] device (lo): Activation: successful, device activated.
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4533] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4536] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4540] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.4544] device (eth1): Activation: successful, device activated.
Dec  1 04:42:24 np0005540825 systemd[1]: Starting Network Manager Wait Online...
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.7942] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.7957] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.8046] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.8086] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.8088] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.8095] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.8101] device (eth0): Activation: successful, device activated.
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.8110] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 04:42:24 np0005540825 NetworkManager[48963]: <info>  [1764582144.8115] manager: startup complete
Dec  1 04:42:24 np0005540825 systemd[1]: Finished Network Manager Wait Online.
Dec  1 04:42:25 np0005540825 python3.9[49179]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:42:29 np0005540825 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 04:42:29 np0005540825 systemd[1]: Starting man-db-cache-update.service...
Dec  1 04:42:29 np0005540825 systemd[1]: Reloading.
Dec  1 04:42:29 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:42:29 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:42:29 np0005540825 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 04:42:30 np0005540825 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 04:42:30 np0005540825 systemd[1]: Finished man-db-cache-update.service.
Dec  1 04:42:30 np0005540825 systemd[1]: run-r44f90512247544cfad0ec864d444cbbb.service: Deactivated successfully.
Dec  1 04:42:33 np0005540825 python3.9[49639]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:42:33 np0005540825 python3.9[49791]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:42:34 np0005540825 python3.9[49945]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:42:34 np0005540825 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 04:42:35 np0005540825 python3.9[50097]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:42:36 np0005540825 python3.9[50249]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:42:36 np0005540825 python3.9[50401]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:42:37 np0005540825 python3.9[50553]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:42:38 np0005540825 python3.9[50676]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764582156.9395168-647-163269824089192/.source _original_basename=.tlg4xff6 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:42:38 np0005540825 python3.9[50828]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:42:39 np0005540825 python3.9[50980]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  1 04:42:40 np0005540825 python3.9[51132]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:42:42 np0005540825 python3.9[51559]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  1 04:42:43 np0005540825 ansible-async_wrapper.py[51734]: Invoked with j260373114879 300 /home/zuul/.ansible/tmp/ansible-tmp-1764582163.1566036-845-6791136422945/AnsiballZ_edpm_os_net_config.py _
Dec  1 04:42:44 np0005540825 ansible-async_wrapper.py[51737]: Starting module and watcher
Dec  1 04:42:44 np0005540825 ansible-async_wrapper.py[51737]: Start watching 51738 (300)
Dec  1 04:42:44 np0005540825 ansible-async_wrapper.py[51738]: Start module (51738)
Dec  1 04:42:44 np0005540825 ansible-async_wrapper.py[51734]: Return async_wrapper task started.
Dec  1 04:42:44 np0005540825 python3.9[51739]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  1 04:42:44 np0005540825 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  1 04:42:44 np0005540825 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  1 04:42:44 np0005540825 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  1 04:42:44 np0005540825 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  1 04:42:44 np0005540825 kernel: cfg80211: failed to load regulatory.db
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1232] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1250] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1882] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1884] audit: op="connection-add" uuid="7f84865b-3f28-44d4-8208-8a07661a5624" name="br-ex-br" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1900] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1901] audit: op="connection-add" uuid="2ca810ad-9898-4102-b4fa-39934e03e5a8" name="br-ex-port" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1916] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1918] audit: op="connection-add" uuid="40aeaf4a-2802-485b-94fe-cb78c30edb42" name="eth1-port" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1930] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1932] audit: op="connection-add" uuid="1cb3dab5-428d-4a81-8ad8-fa60d7805b0a" name="vlan20-port" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1948] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1950] audit: op="connection-add" uuid="e05cbc94-522d-4ca5-a2e2-b369b2f61619" name="vlan21-port" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1963] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1965] audit: op="connection-add" uuid="a03680e0-2e9b-4bcf-aa8a-bbdf853a3501" name="vlan22-port" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1977] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.1980] audit: op="connection-add" uuid="3647ca44-99e5-48b1-a19f-c3687b5c561f" name="vlan23-port" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2001] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2018] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2020] audit: op="connection-add" uuid="00643831-0b75-410f-a9d8-962b5c7547ef" name="br-ex-if" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2069] audit: op="connection-update" uuid="cf876690-0410-53d6-9ecb-0fe69a303d1c" name="ci-private-network" args="ipv6.dns,ipv6.routes,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.addresses,ipv6.method,ovs-interface.type,connection.controller,connection.master,connection.port-type,connection.timestamp,connection.slave-type,ipv4.dns,ipv4.routes,ipv4.routing-rules,ipv4.never-default,ipv4.addresses,ipv4.method,ovs-external-ids.data" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2089] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2091] audit: op="connection-add" uuid="a9674a2a-de77-41fd-aa37-9679662a296b" name="vlan20-if" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2111] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2113] audit: op="connection-add" uuid="cff8b914-6c63-4a84-a73a-fe71657982b6" name="vlan21-if" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2132] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2134] audit: op="connection-add" uuid="c0ff7e8d-3159-4009-bafb-fbf20a79c6cd" name="vlan22-if" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2153] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2155] audit: op="connection-add" uuid="d9ff1e4e-b91d-4ff3-a2a0-6c00c5def14e" name="vlan23-if" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2171] audit: op="connection-delete" uuid="4c8aaa08-05a9-3821-9575-0ca27c8b2493" name="Wired connection 1" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2188] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2197] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2202] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (7f84865b-3f28-44d4-8208-8a07661a5624)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2203] audit: op="connection-activate" uuid="7f84865b-3f28-44d4-8208-8a07661a5624" name="br-ex-br" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2205] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2212] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2216] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (2ca810ad-9898-4102-b4fa-39934e03e5a8)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2218] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2223] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2227] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (40aeaf4a-2802-485b-94fe-cb78c30edb42)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2229] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2236] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2239] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (1cb3dab5-428d-4a81-8ad8-fa60d7805b0a)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2241] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2248] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2251] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (e05cbc94-522d-4ca5-a2e2-b369b2f61619)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2254] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2259] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2263] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (a03680e0-2e9b-4bcf-aa8a-bbdf853a3501)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2265] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2271] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2275] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (3647ca44-99e5-48b1-a19f-c3687b5c561f)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2276] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2279] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2281] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2286] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2291] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2295] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (00643831-0b75-410f-a9d8-962b5c7547ef)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2296] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2299] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2302] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2304] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2306] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2316] device (eth1): disconnecting for new activation request.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2317] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2321] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2323] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2324] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2328] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2332] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2336] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (a9674a2a-de77-41fd-aa37-9679662a296b)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2338] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2340] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2342] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2344] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2347] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2354] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2359] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (cff8b914-6c63-4a84-a73a-fe71657982b6)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2360] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2364] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2367] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2369] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2373] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2380] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2384] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (c0ff7e8d-3159-4009-bafb-fbf20a79c6cd)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2385] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2388] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2391] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2392] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2395] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2400] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2404] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (d9ff1e4e-b91d-4ff3-a2a0-6c00c5def14e)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2406] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2409] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2411] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2413] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2415] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2429] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2432] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2436] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2439] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2447] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2453] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 kernel: ovs-system: entered promiscuous mode
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2469] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2476] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2479] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2486] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2492] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 kernel: Timeout policy base is empty
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2497] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2499] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 systemd-udevd[51744]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2505] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2512] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2516] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2518] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2524] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2531] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2547] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2549] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2556] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2564] dhcp4 (eth0): canceled DHCP transaction
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2565] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2565] dhcp4 (eth0): state changed no lease
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2568] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2588] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2596] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51740 uid=0 result="fail" reason="Device is not activated"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2606] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2686] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2692] dhcp4 (eth0): state changed new lease, address=38.102.83.181
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2704] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2748] device (eth1): disconnecting for new activation request.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2749] audit: op="connection-activate" uuid="cf876690-0410-53d6-9ecb-0fe69a303d1c" name="ci-private-network" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2749] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2818] device (eth1): Activation: starting connection 'ci-private-network' (cf876690-0410-53d6-9ecb-0fe69a303d1c)
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2821] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2823] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2825] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2827] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2830] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2842] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2846] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2852] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2856] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2860] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2864] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2866] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2869] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2873] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2876] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2879] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2881] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2883] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2885] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51740 uid=0 result="success"
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2888] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 kernel: br-ex: entered promiscuous mode
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2895] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2899] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2903] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2907] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2912] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2916] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2926] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2930] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2965] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2967] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.2973] device (eth1): Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 kernel: vlan22: entered promiscuous mode
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3028] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3037] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3049] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3051] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 systemd-udevd[51746]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3060] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 kernel: vlan23: entered promiscuous mode
Dec  1 04:42:46 np0005540825 kernel: vlan20: entered promiscuous mode
Dec  1 04:42:46 np0005540825 systemd-udevd[51745]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3188] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3197] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3212] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3223] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3234] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3236] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 kernel: vlan21: entered promiscuous mode
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3240] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3290] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3291] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3293] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3299] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3316] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3349] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3351] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3357] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3395] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3405] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3420] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3421] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 04:42:46 np0005540825 NetworkManager[48963]: <info>  [1764582166.3426] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
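
At this point NetworkManager has finished activating the Open vSwitch topology: br-ex is up with eth1 attached as its uplink and vlan20 through vlan23 as internal ports. A quick way to confirm the result from Python, assuming the openvswitch CLI is installed (a verification sketch, not part of the deployment itself):

    import subprocess

    # List the ports attached to br-ex; from the activations above we
    # expect eth1 plus the four VLAN internal ports.
    ports = subprocess.run(
        ["ovs-vsctl", "list-ports", "br-ex"],
        capture_output=True, text=True, check=True).stdout.split()
    print(sorted(ports))  # e.g. ['eth1', 'vlan20', 'vlan21', 'vlan22', 'vlan23']
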
Dec  1 04:42:47 np0005540825 NetworkManager[48963]: <info>  [1764582167.4664] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51740 uid=0 result="success"
Dec  1 04:42:47 np0005540825 NetworkManager[48963]: <info>  [1764582167.7185] checkpoint[0x559906a8e950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  1 04:42:47 np0005540825 NetworkManager[48963]: <info>  [1764582167.7189] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51740 uid=0 result="success"
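
The checkpoint-create / checkpoint-adjust-rollback-timeout / checkpoint-destroy audit trail is NetworkManager's D-Bus checkpoint API: the caller (pid 51740, the os-net-config run) snapshots network state, keeps extending the rollback timer while it reconfigures, and destroys the checkpoint on success so nothing is rolled back. A minimal sketch of the same sequence, assuming dbus-python is available; the timeout values are illustrative:

    import dbus

    bus = dbus.SystemBus()
    nm = dbus.Interface(
        bus.get_object("org.freedesktop.NetworkManager",
                       "/org/freedesktop/NetworkManager"),
        "org.freedesktop.NetworkManager")
    devices = dbus.Array([], signature="o")      # empty list = all devices
    cp = nm.CheckpointCreate(devices, 60, 0)     # auto-rollback after 60 s
    nm.CheckpointAdjustRollbackTimeout(cp, 120)  # keep-alive during the change
    nm.CheckpointDestroy(cp)                     # commit: cancel the rollback

If the caller dies before CheckpointDestroy, NetworkManager restores the snapshot when the timer expires, which is what makes remote network reconfiguration like this safe.
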
Dec  1 04:42:47 np0005540825 python3.9[52097]: ansible-ansible.legacy.async_status Invoked with jid=j260373114879.51734 mode=status _async_dir=/root/.ansible_async
Dec  1 04:42:48 np0005540825 NetworkManager[48963]: <info>  [1764582168.0908] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51740 uid=0 result="success"
Dec  1 04:42:48 np0005540825 NetworkManager[48963]: <info>  [1764582168.0920] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51740 uid=0 result="success"
Dec  1 04:42:48 np0005540825 NetworkManager[48963]: <info>  [1764582168.3795] audit: op="networking-control" arg="global-dns-configuration" pid=51740 uid=0 result="success"
Dec  1 04:42:48 np0005540825 NetworkManager[48963]: <info>  [1764582168.3825] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  1 04:42:48 np0005540825 NetworkManager[48963]: <info>  [1764582168.3849] audit: op="networking-control" arg="global-dns-configuration" pid=51740 uid=0 result="success"
Dec  1 04:42:48 np0005540825 NetworkManager[48963]: <info>  [1764582168.3873] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51740 uid=0 result="success"
Dec  1 04:42:48 np0005540825 NetworkManager[48963]: <info>  [1764582168.6023] checkpoint[0x559906a8ea20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  1 04:42:48 np0005540825 NetworkManager[48963]: <info>  [1764582168.6027] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51740 uid=0 result="success"
Dec  1 04:42:48 np0005540825 ansible-async_wrapper.py[51738]: Module complete (51738)
Dec  1 04:42:49 np0005540825 ansible-async_wrapper.py[51737]: Done in kid B.
Dec  1 04:42:51 np0005540825 python3.9[52204]: ansible-ansible.legacy.async_status Invoked with jid=j260373114879.51734 mode=status _async_dir=/root/.ansible_async
Dec  1 04:42:51 np0005540825 python3.9[52303]: ansible-ansible.legacy.async_status Invoked with jid=j260373114879.51734 mode=cleanup _async_dir=/root/.ansible_async
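
The async_status calls poll the job started earlier by async_wrapper.py: the wrapper writes its result as JSON into _async_dir under a file derived from the jid, the job counts as done once the payload contains "finished": 1, and mode=cleanup then deletes the file. A hypothetical version of the polling side (the status-file name matching the jid is an assumption):

    import json, time
    from pathlib import Path

    # Assumed layout: /root/.ansible_async/<jid> holds the job's JSON result.
    status = Path("/root/.ansible_async/j260373114879.51734")
    result = {}
    while not result.get("finished"):
        try:
            result = json.loads(status.read_text())
        except (FileNotFoundError, json.JSONDecodeError):
            result = {}   # wrapper hasn't written (or finished writing) yet
        time.sleep(3)
    print(result.get("rc"))  # the wrapped command's exit status
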
Dec  1 04:42:52 np0005540825 python3.9[52455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:42:53 np0005540825 python3.9[52578]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764582172.1976767-926-9880617990356/.source.returncode _original_basename=.20t5i3jb follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
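
The checksum Ansible logged for os-net-config.returncode, b6589fc6ab0dc82cf12099d1c2d40ab994e8410c, is the SHA-1 of the single byte "0", so the file records a zero (successful) exit status for the os-net-config run above:

    import hashlib

    # Matches the checksum in the copy task: the file content is just "0".
    print(hashlib.sha1(b"0").hexdigest())
    # b6589fc6ab0dc82cf12099d1c2d40ab994e8410c
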
Dec  1 04:42:54 np0005540825 python3.9[52730]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:42:54 np0005540825 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 04:42:54 np0005540825 python3.9[52857]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764582173.5559785-974-51122045407026/.source.cfg _original_basename=.2a0atz8c follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:42:55 np0005540825 python3.9[53009]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:42:55 np0005540825 systemd[1]: Reloading Network Manager...
Dec  1 04:42:55 np0005540825 NetworkManager[48963]: <info>  [1764582175.8518] audit: op="reload" arg="0" pid=53013 uid=0 result="success"
Dec  1 04:42:55 np0005540825 NetworkManager[48963]: <info>  [1764582175.8528] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  1 04:42:55 np0005540825 systemd[1]: Reloaded Network Manager.
Dec  1 04:42:56 np0005540825 systemd[1]: session-10.scope: Deactivated successfully.
Dec  1 04:42:56 np0005540825 systemd[1]: session-10.scope: Consumed 53.261s CPU time.
Dec  1 04:42:56 np0005540825 systemd-logind[789]: Session 10 logged out. Waiting for processes to exit.
Dec  1 04:42:56 np0005540825 systemd-logind[789]: Removed session 10.
Dec  1 04:43:01 np0005540825 systemd-logind[789]: New session 11 of user zuul.
Dec  1 04:43:01 np0005540825 systemd[1]: Started Session 11 of User zuul.
Dec  1 04:43:02 np0005540825 python3.9[53199]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:43:04 np0005540825 python3.9[53353]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:43:05 np0005540825 python3.9[53547]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:43:05 np0005540825 systemd[1]: session-11.scope: Deactivated successfully.
Dec  1 04:43:05 np0005540825 systemd[1]: session-11.scope: Consumed 2.724s CPU time.
Dec  1 04:43:05 np0005540825 systemd-logind[789]: Session 11 logged out. Waiting for processes to exit.
Dec  1 04:43:05 np0005540825 systemd-logind[789]: Removed session 11.
Dec  1 04:43:05 np0005540825 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 04:43:10 np0005540825 systemd-logind[789]: New session 12 of user zuul.
Dec  1 04:43:10 np0005540825 systemd[1]: Started Session 12 of User zuul.
Dec  1 04:43:12 np0005540825 python3.9[53729]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:43:13 np0005540825 python3.9[53883]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:43:14 np0005540825 python3.9[54040]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:43:14 np0005540825 python3.9[54124]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:43:17 np0005540825 python3.9[54277]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:43:18 np0005540825 python3.9[54473]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:43:19 np0005540825 python3.9[54625]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:43:19 np0005540825 systemd[1]: var-lib-containers-storage-overlay-compat2509030179-merged.mount: Deactivated successfully.
Dec  1 04:43:19 np0005540825 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck3014478173-merged.mount: Deactivated successfully.
Dec  1 04:43:19 np0005540825 podman[54626]: 2025-12-01 09:43:19.502376905 +0000 UTC m=+0.079247351 system refresh
Dec  1 04:43:20 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:43:20 np0005540825 python3.9[54789]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:43:21 np0005540825 python3.9[54912]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582199.7999158-197-41727506362721/.source.json follow=False _original_basename=podman_network_config.j2 checksum=92021276eff7a1832de279649d76eabb12600519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:43:21 np0005540825 python3.9[55064]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:43:22 np0005540825 python3.9[55187]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764582201.4370024-242-106739213483187/.source.conf follow=False _original_basename=registries.conf.j2 checksum=a92d4bce7d9cad3a31d9a297b9e21f629ee446cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:43:23 np0005540825 python3.9[55339]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:43:24 np0005540825 python3.9[55491]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:43:24 np0005540825 python3.9[55643]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:43:25 np0005540825 python3.9[55795]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
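
Taken together, the four ini_file tasks above assemble /etc/containers/containers.conf (mode 0644, SELinux type etc_t). Reconstructed from the module arguments rather than read from the host, the managed file should end up roughly as:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"

That is, podman is pinned to the crun runtime and the netavark network backend, with container events sent to journald and a 4096-PID cap per container.
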
Dec  1 04:43:26 np0005540825 python3.9[55947]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:43:29 np0005540825 python3.9[56100]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:43:29 np0005540825 python3.9[56254]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:43:30 np0005540825 python3.9[56406]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:43:31 np0005540825 python3.9[56558]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:43:32 np0005540825 python3.9[56711]: ansible-service_facts Invoked
Dec  1 04:43:32 np0005540825 network[56728]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 04:43:32 np0005540825 network[56729]: 'network-scripts' will be removed from distribution in near future.
Dec  1 04:43:32 np0005540825 network[56730]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 04:43:38 np0005540825 python3.9[57182]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:43:42 np0005540825 python3.9[57335]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  1 04:43:43 np0005540825 python3.9[57487]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:43:44 np0005540825 python3.9[57612]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764582223.1566525-674-265905993916195/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:43:45 np0005540825 python3.9[57766]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:43:45 np0005540825 python3.9[57891]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764582224.7191226-719-234842948662489/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:43:47 np0005540825 python3.9[58045]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
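
PEERNTP=no in /etc/sysconfig/network stops the DHCP client hooks from feeding DHCP-supplied NTP servers to chronyd, so only the servers templated into /etc/chrony.conf are used. The lineinfile invocation amounts to an idempotent replace-or-append, sketched here (not the module's actual implementation):

    import re
    from pathlib import Path

    def ensure_line(path, regexp, line):
        """Replace the first line matching regexp, else append line."""
        p = Path(path)
        lines = p.read_text().splitlines() if p.exists() else []
        pat = re.compile(regexp)
        for i, existing in enumerate(lines):
            if pat.search(existing):
                lines[i] = line
                break
        else:
            lines.append(line)
        p.write_text("\n".join(lines) + "\n")

    ensure_line("/etc/sysconfig/network", r"^PEERNTP=", "PEERNTP=no")
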
Dec  1 04:43:49 np0005540825 python3.9[58199]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:43:50 np0005540825 python3.9[58283]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:43:52 np0005540825 python3.9[58437]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:43:53 np0005540825 python3.9[58521]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:43:53 np0005540825 systemd[1]: Stopping NTP client/server...
Dec  1 04:43:53 np0005540825 chronyd[795]: chronyd exiting
Dec  1 04:43:53 np0005540825 systemd[1]: chronyd.service: Deactivated successfully.
Dec  1 04:43:53 np0005540825 systemd[1]: Stopped NTP client/server.
Dec  1 04:43:53 np0005540825 systemd[1]: Starting NTP client/server...
Dec  1 04:43:53 np0005540825 chronyd[58529]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  1 04:43:53 np0005540825 chronyd[58529]: Frequency -26.511 +/- 0.455 ppm read from /var/lib/chrony/drift
Dec  1 04:43:53 np0005540825 chronyd[58529]: Loaded seccomp filter (level 2)
Dec  1 04:43:53 np0005540825 systemd[1]: Started NTP client/server.
Dec  1 04:43:53 np0005540825 systemd[1]: session-12.scope: Deactivated successfully.
Dec  1 04:43:53 np0005540825 systemd[1]: session-12.scope: Consumed 29.950s CPU time.
Dec  1 04:43:53 np0005540825 systemd-logind[789]: Session 12 logged out. Waiting for processes to exit.
Dec  1 04:43:53 np0005540825 systemd-logind[789]: Removed session 12.
Dec  1 04:44:04 np0005540825 systemd-logind[789]: New session 13 of user zuul.
Dec  1 04:44:04 np0005540825 systemd[1]: Started Session 13 of User zuul.
Dec  1 04:44:05 np0005540825 python3.9[58710]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:06 np0005540825 python3.9[58862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:07 np0005540825 python3.9[58985]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764582246.1090972-62-90339802832092/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:07 np0005540825 systemd[1]: session-13.scope: Deactivated successfully.
Dec  1 04:44:07 np0005540825 systemd[1]: session-13.scope: Consumed 1.930s CPU time.
Dec  1 04:44:07 np0005540825 systemd-logind[789]: Session 13 logged out. Waiting for processes to exit.
Dec  1 04:44:07 np0005540825 systemd-logind[789]: Removed session 13.
Dec  1 04:44:13 np0005540825 systemd-logind[789]: New session 14 of user zuul.
Dec  1 04:44:13 np0005540825 systemd[1]: Started Session 14 of User zuul.
Dec  1 04:44:14 np0005540825 python3.9[59163]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:44:15 np0005540825 python3.9[59319]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:16 np0005540825 python3.9[59494]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:17 np0005540825 python3.9[59617]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764582256.1932507-83-88691826685218/.source.json _original_basename=.p84l3f38 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
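
The checksum for /root/.config/containers/auth.json, bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f, matches the SHA-1 of the two-byte string "{}": the registry auth file is being written as an empty JSON object, i.e. no credentials yet.

    import hashlib
    print(hashlib.sha1(b"{}").hexdigest())
    # bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f
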
Dec  1 04:44:18 np0005540825 python3.9[59769]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:19 np0005540825 python3.9[59892]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764582258.061205-152-53762802150266/.source _original_basename=.ilx_3jb5 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:20 np0005540825 python3.9[60044]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:44:20 np0005540825 python3.9[60196]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:21 np0005540825 python3.9[60319]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764582260.2214763-224-247048755016480/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:44:21 np0005540825 python3.9[60471]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:22 np0005540825 python3.9[60594]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764582261.4433544-224-98683585106642/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:44:23 np0005540825 python3.9[60746]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:24 np0005540825 python3.9[60898]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:25 np0005540825 python3.9[61021]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582263.4559007-335-201588976100875/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:25 np0005540825 python3.9[61173]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:26 np0005540825 python3.9[61296]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582265.321096-380-230585865129370/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
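
A systemd preset file is simply a list of enable/disable directives that systemctl preset consults. Paired with the unit installed above and the enable step that follows, 91-edpm-container-shutdown.preset presumably holds a single directive along these lines (assumed content, not read from the host):

    enable edpm-container-shutdown.service
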
Dec  1 04:44:27 np0005540825 python3.9[61448]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:44:27 np0005540825 systemd[1]: Reloading.
Dec  1 04:44:27 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:44:27 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:44:27 np0005540825 systemd[1]: Reloading.
Dec  1 04:44:27 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:44:28 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:44:28 np0005540825 systemd[1]: Starting EDPM Container Shutdown...
Dec  1 04:44:28 np0005540825 systemd[1]: Finished EDPM Container Shutdown.
Dec  1 04:44:28 np0005540825 python3.9[61674]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:29 np0005540825 python3.9[61797]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582268.3305733-449-9141597337088/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:30 np0005540825 python3.9[61949]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:30 np0005540825 python3.9[62072]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582269.6119423-494-90366603000374/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:31 np0005540825 python3.9[62224]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:44:31 np0005540825 systemd[1]: Reloading.
Dec  1 04:44:31 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:44:31 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:44:31 np0005540825 systemd[1]: Reloading.
Dec  1 04:44:31 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:44:31 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:44:32 np0005540825 systemd[1]: Starting Create netns directory...
Dec  1 04:44:32 np0005540825 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 04:44:32 np0005540825 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 04:44:32 np0005540825 systemd[1]: Finished Create netns directory.
Dec  1 04:44:32 np0005540825 python3.9[62451]: ansible-ansible.builtin.service_facts Invoked
Dec  1 04:44:33 np0005540825 network[62468]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 04:44:33 np0005540825 network[62469]: 'network-scripts' will be removed from distribution in near future.
Dec  1 04:44:33 np0005540825 network[62470]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 04:44:37 np0005540825 python3.9[62732]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:44:37 np0005540825 systemd[1]: Reloading.
Dec  1 04:44:37 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:44:37 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:44:37 np0005540825 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  1 04:44:37 np0005540825 iptables.init[62772]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  1 04:44:38 np0005540825 iptables.init[62772]: iptables: Flushing firewall rules: [  OK  ]
Dec  1 04:44:38 np0005540825 systemd[1]: iptables.service: Deactivated successfully.
Dec  1 04:44:38 np0005540825 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  1 04:44:38 np0005540825 python3.9[62968]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:44:40 np0005540825 python3.9[63122]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:44:40 np0005540825 systemd[1]: Reloading.
Dec  1 04:44:40 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:44:40 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:44:40 np0005540825 systemd[1]: Starting Netfilter Tables...
Dec  1 04:44:40 np0005540825 systemd[1]: Finished Netfilter Tables.
Dec  1 04:44:41 np0005540825 python3.9[63314]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:44:42 np0005540825 python3.9[63467]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:43 np0005540825 python3.9[63592]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764582281.8320801-701-257872361172046/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
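
Note the validate=/usr/sbin/sshd -T -f %s argument: Ansible writes the rendered sshd_config to a temporary path, substitutes that path for %s, and only moves the file over /etc/ssh/sshd_config if the validator exits 0 (sshd -T parses and dumps the effective config, failing on syntax errors). The same write-validate-swap pattern in plain Python, as a sketch:

    import os, subprocess, tempfile

    def install_sshd_config(content, dest="/etc/ssh/sshd_config"):
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.chmod(tmp, 0o600)
        # Reject the candidate before touching the live config.
        subprocess.run(["/usr/sbin/sshd", "-T", "-f", tmp],
                       check=True, capture_output=True)
        os.replace(tmp, dest)  # atomic swap, only after validation passed
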
Dec  1 04:44:44 np0005540825 python3.9[63745]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 04:44:44 np0005540825 systemd[1]: Reloading OpenSSH server daemon...
Dec  1 04:44:44 np0005540825 systemd[1]: Reloaded OpenSSH server daemon.
Dec  1 04:44:45 np0005540825 python3.9[63901]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:45 np0005540825 python3.9[64053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:46 np0005540825 python3.9[64176]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582285.2766535-794-143354976187638/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:47 np0005540825 irqbalance[785]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  1 04:44:47 np0005540825 irqbalance[785]: IRQ 26 affinity is now unmanaged
Dec  1 04:44:47 np0005540825 python3.9[64328]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  1 04:44:47 np0005540825 systemd[1]: Starting Time & Date Service...
Dec  1 04:44:47 np0005540825 systemd[1]: Started Time & Date Service.
Dec  1 04:44:48 np0005540825 python3.9[64484]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:49 np0005540825 python3.9[64636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:50 np0005540825 python3.9[64759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764582288.8150861-899-109043913596483/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:50 np0005540825 python3.9[64911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:51 np0005540825 python3.9[65034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764582290.2207532-944-24759049010310/.source.yaml _original_basename=.e46dtnkd follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:52 np0005540825 python3.9[65186]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:52 np0005540825 python3.9[65309]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582291.747893-989-211273309466463/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:53 np0005540825 python3.9[65461]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:44:54 np0005540825 python3.9[65614]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:44:55 np0005540825 python3[65767]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 04:44:55 np0005540825 python3.9[65919]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:56 np0005540825 python3.9[66042]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582295.3687012-1106-208975823153662/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:57 np0005540825 python3.9[66194]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:57 np0005540825 python3.9[66317]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582296.824633-1151-43711466891338/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:58 np0005540825 python3.9[66469]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:44:59 np0005540825 python3.9[66592]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582298.144628-1196-229195239620436/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:44:59 np0005540825 python3.9[66744]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:45:00 np0005540825 python3.9[66869]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582299.4792607-1241-278915363145151/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:45:01 np0005540825 python3.9[67021]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:45:01 np0005540825 python3.9[67144]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764582300.7711544-1286-69822474930474/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:45:02 np0005540825 python3.9[67296]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:45:03 np0005540825 python3.9[67448]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
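
The firewall role keeps the ruleset split across several .nft files and first concatenates them through nft -c -f -, a check-only parse in which nothing reaches the kernel; the live apply happens later in the run. The dry-run step in Python, for illustration:

    import subprocess
    from pathlib import Path

    files = ["edpm-chains.nft", "edpm-flushes.nft", "edpm-rules.nft",
             "edpm-update-jumps.nft", "edpm-jumps.nft"]
    ruleset = "".join(Path("/etc/nftables", f).read_text() for f in files)
    # -c = check mode: validate the combined ruleset without applying it.
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset,
                   text=True, check=True)
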
Dec  1 04:45:04 np0005540825 python3.9[67607]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
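
Decoding the #012 (newline) escapes, the managed block written into /etc/sysconfig/nftables.conf is:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

nftables.service loads this file at boot, so the base chains, rules and jumps persist across reboots; notably, the flush and update-jumps files are not part of the boot-time include list.
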
Dec  1 04:45:05 np0005540825 python3.9[67760]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:45:05 np0005540825 python3.9[67912]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:45:06 np0005540825 python3.9[68064]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  1 04:45:07 np0005540825 python3.9[68217]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
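
With boot=True, ansible.posix.mount both mounts the hugetlbfs filesystems and persists them. Reconstructed from the module arguments, the resulting /etc/fstab entries should look like:

    none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
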
Dec  1 04:45:07 np0005540825 systemd[1]: session-14.scope: Deactivated successfully.
Dec  1 04:45:07 np0005540825 systemd[1]: session-14.scope: Consumed 41.726s CPU time.
Dec  1 04:45:07 np0005540825 systemd-logind[789]: Session 14 logged out. Waiting for processes to exit.
Dec  1 04:45:07 np0005540825 systemd-logind[789]: Removed session 14.
Dec  1 04:45:13 np0005540825 systemd-logind[789]: New session 15 of user zuul.
Dec  1 04:45:13 np0005540825 systemd[1]: Started Session 15 of User zuul.
Dec  1 04:45:13 np0005540825 python3.9[68398]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  1 04:45:14 np0005540825 python3.9[68550]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:45:15 np0005540825 python3.9[68702]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:45:16 np0005540825 python3.9[68854]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+tytlc2ziEXCaePFL6NCHfQfG5hnoDOgK+/O6WujzT2GFJESz6sgXypOXA+ry9uSM1AFkZgIIj7YfrFvtxYbWsEyzbhXKiOr8noIZGkfc+43imB+C2FgUp5ZwQSFnnxyIiXQWwKIjrOXbXE1r5SClA+FIAojDoectq/AbKwehIzD1ayHdfehF7BTfXJbkf64RgNcctGyjz0LPxY2mXC0kQXEFZSqJIOn5sys9wQEkjd4XlXA66oaJPV948m4ApJniNd9ohIVmXKAO5Bo6D4WQVvrA03w7PurWjJmpQuKNNwzAn2MMUfwfF0FiH9nxKa5/yEHRA/jTlNtqA/xOFC1uvGvgfWLDMfh+AtXxrNJXtp+qeATiUthHFK9ZRT6xaqkdd+LzySkLVyUCxpvEeOSKcHCqoxNBMZ5p9skmKbus5DRvzBSzPSGfBqh+7efuwSYYRveVZ2iqukef+cMJ5t+mlGuIAZulVVeLXhivpqH20o4d+WgBLNWpPZtP1w3vnds=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKDMbjmqVhbMiFxfeq71aiHzezH5+ve9aaRv6tecZ9yt#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD2a9/UKab06QjpszdfyP/8+Fmx0ghbxasoTU/24//g4p6oYwAMEXLcqU8YkQj66SK/B/CRmkko20tQpuvcB+LQ=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9iOYT2GM4L6SHZTMq11oZ+BAk/eXQ8XBJJYa2Eo/9VKQiuDMNzjXWKc1heeqMgloaJAk+En3hPDTZcnt14xKW0weSVhc1GuXBU3IqdQGeO3nyjdhUNxj2O6Syt/8Srh0+ne/yimC9BxBrCHKmwPPCx0TTtiy3n953HP5w0wedM8MI2bl9X4CaVwEtwSUbhFJgRaAVvg1jWUBV+tE9CGQXy1Y7raeATTLvRa3PIqU2pSDvvN44SuFWubkATb9CNZfejG2Tz2N709KveFa1tPaAjiuj046dUN+nb5eMroLvf2T2MoSQ12AUXHcpxVB6qb918qUpn8x9/V65c4fkXQ3nNgbF3IHP7RcwSs0XISdGLMT1NPTmYDhECjFDqTwkiK+goHUXZY3N3dYfjS9uqS1/66OIDlWK6niL0DMO6j+L/iriIIzPVWmrEz384bDc+wVQgGjmVXolCOWq/vp6TE1nAFqsNTZmQXC8BHCGtitnnWgzgbJX3D4O4dBOqHqdPr8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGEIBRopLb4IdSGL1f5PVbv9932FzGHz/9YCDTQr6PvA#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDEJ0q084PIbFOMDxHa25lnKuVffDClzijZagkDx2W3Z17XxuTVNXMnebqlksv3x5cE8TQLF/PIAPJS87wX+Nuo=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNxuYL62ECxG4tKU506Q3pIBb6yt0LTfxUgzUGORrXbIq9WrYwVeb+Lkx8v046r7H1KM8BsXHHuc+/3UYA3ldToNXUkjnpV43woAUm6zBViUE4+fgkcOJmVpRTZ/uXPMGTCGECUFZ9zuo3AFkcF0ERCcieOSdVs4uPytJLM0anMY2JZ9BHHzwlK3u+R7I452i/2bTjizB5yGGjV/5usLKdzn3gANHxbNcnVh+sI8fLZDldSAoeh+Lmihzsfp+4optdWgF0GnEgV3ui8NyR+nrPN2A09+4jC0EKzW3P8PT6CaTEgt95tkEYJ0/ihBlX210GmX32GEZfnHIOSflIiIeeAz/8vomjGlRwArfsmlOxT56Q9rekK5hD2orlFCjOvrzfoJN7vvTaE/P8ls/6015TUzbkS2WqhMLJbIvNcumWshvtYifwfnwMI2BK7YTHKpx1Qc/3anJqszHfO0G7ar3+3DemlY50qxApCrKUlE/w1rQtiN1VKmlioP2XpCmwe1s=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKm9ziDthsQekJ2ppuyoRsJLe7WplMYSfdzI6Ftkcb9s#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAnzEG8a/rCCjdE5RU3Uk/1EHo5xwDY20eWwn6aeXJMS7blUnv3gyCa8WoIefjhilEbylrojzG4Tmv2ZgeeLQd4=#012 create=True mode=0644 path=/tmp/ansible.wok8x2sy state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:45:17 np0005540825 python3.9[69006]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.wok8x2sy' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:45:17 np0005540825 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  1 04:45:18 np0005540825 python3.9[69163]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.wok8x2sy state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
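The three module invocations above form one logical step: stage the compute hosts' SSH keys in a temporary file with blockinfile, install that file as the system-wide known_hosts, then delete the staging file. A minimal sketch of the playbook side, with paths taken from the log and the key material abstracted into an assumed known_hosts_block variable:

    - name: Stage compute host keys in a temporary file
      ansible.builtin.blockinfile:
        path: /tmp/ansible.wok8x2sy
        create: true
        mode: "0644"
        block: "{{ known_hosts_block }}"  # assumed variable: one host-key line per host and key type

    - name: Install the staged block as the system-wide known_hosts
      ansible.builtin.shell: cat '/tmp/ansible.wok8x2sy' > /etc/ssh/ssh_known_hosts

    - name: Remove the staging file
      ansible.builtin.file:
        path: /tmp/ansible.wok8x2sy
        state: absent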
Dec  1 04:45:19 np0005540825 systemd[1]: session-15.scope: Deactivated successfully.
Dec  1 04:45:19 np0005540825 systemd[1]: session-15.scope: Consumed 3.936s CPU time.
Dec  1 04:45:19 np0005540825 systemd-logind[789]: Session 15 logged out. Waiting for processes to exit.
Dec  1 04:45:19 np0005540825 systemd-logind[789]: Removed session 15.
Dec  1 04:45:25 np0005540825 systemd-logind[789]: New session 16 of user zuul.
Dec  1 04:45:25 np0005540825 systemd[1]: Started Session 16 of User zuul.
Dec  1 04:45:26 np0005540825 python3.9[69341]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:45:27 np0005540825 python3.9[69497]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  1 04:45:29 np0005540825 python3.9[69651]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
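Session 16 opens with two systemd-module tasks: one enables sshd at boot, the other makes sure it is running now. Approximately:

    - name: Enable sshd at boot
      ansible.builtin.systemd:
        name: sshd
        enabled: true

    - name: Ensure sshd is running
      ansible.builtin.systemd:
        name: sshd
        state: started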
Dec  1 04:45:30 np0005540825 python3.9[69804]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:45:31 np0005540825 python3.9[69957]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:45:32 np0005540825 python3.9[70111]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:45:32 np0005540825 python3.9[70266]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
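The nftables sequence above loads the base chains, checks a ".changed" marker, pipes the flush, rule, and jump-update files into a single "nft -f -" so the whole rule set is applied in one transaction, and finally clears the marker. A sketch under those assumptions (the commands and paths are verbatim from the log; the register name and the when-condition are assumed):

    - name: Load the EDPM base chains
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

    - name: Check whether the rule files changed since the last apply
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: edpm_rules_changed  # assumed register name

    - name: Apply flushes, rules and jump updates in one nft transaction
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: edpm_rules_changed.stat.exists  # assumed condition

    - name: Clear the change marker
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent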
Dec  1 04:45:33 np0005540825 systemd[1]: session-16.scope: Deactivated successfully.
Dec  1 04:45:33 np0005540825 systemd[1]: session-16.scope: Consumed 5.068s CPU time.
Dec  1 04:45:33 np0005540825 systemd-logind[789]: Session 16 logged out. Waiting for processes to exit.
Dec  1 04:45:33 np0005540825 systemd-logind[789]: Removed session 16.
Dec  1 04:45:38 np0005540825 systemd-logind[789]: New session 17 of user zuul.
Dec  1 04:45:38 np0005540825 systemd[1]: Started Session 17 of User zuul.
Dec  1 04:45:39 np0005540825 python3.9[70444]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:45:40 np0005540825 python3.9[70600]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:45:41 np0005540825 python3.9[70684]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 04:45:43 np0005540825 python3.9[70835]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:45:44 np0005540825 python3.9[70986]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
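Session 17 is a reboot-requirement probe: install yum-utils, run "needs-restarting -r" (exit status 0 means no reboot is needed, 1 means one is), and scan /var/lib/openstack/reboot_required/ for explicit flag files. A sketch with assumed register names:

    - name: Install yum-utils to get needs-restarting
      ansible.builtin.dnf:
        name: yum-utils

    - name: Ask dnf whether the running kernel or core services require a reboot
      ansible.builtin.command: needs-restarting -r
      register: needs_restarting   # assumed name
      failed_when: false           # exit status 1 only means "reboot required"

    - name: Look for explicit reboot-required flag files
      ansible.builtin.find:
        paths:
          - /var/lib/openstack/reboot_required/
      register: reboot_flags       # assumed name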
Dec  1 04:45:45 np0005540825 python3.9[71136]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:45:45 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 04:45:46 np0005540825 python3.9[71287]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:45:47 np0005540825 systemd[1]: session-17.scope: Deactivated successfully.
Dec  1 04:45:47 np0005540825 systemd[1]: session-17.scope: Consumed 6.524s CPU time.
Dec  1 04:45:47 np0005540825 systemd-logind[789]: Session 17 logged out. Waiting for processes to exit.
Dec  1 04:45:47 np0005540825 systemd-logind[789]: Removed session 17.
Dec  1 04:45:55 np0005540825 systemd-logind[789]: New session 18 of user zuul.
Dec  1 04:45:55 np0005540825 systemd[1]: Started Session 18 of User zuul.
Dec  1 04:46:01 np0005540825 python3[72054]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:46:03 np0005540825 chronyd[58529]: Selected source 149.56.19.163 (pool.ntp.org)
Dec  1 04:46:03 np0005540825 python3[72149]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  1 04:46:05 np0005540825 python3[72176]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 04:46:05 np0005540825 python3[72202]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:46:05 np0005540825 kernel: loop: module loaded
Dec  1 04:46:05 np0005540825 kernel: loop3: detected capacity change from 0 to 41943040
Dec  1 04:46:06 np0005540825 python3[72237]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:46:06 np0005540825 lvm[72240]: PV /dev/loop3 not used.
Dec  1 04:46:06 np0005540825 lvm[72249]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:46:06 np0005540825 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  1 04:46:06 np0005540825 lvm[72251]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec  1 04:46:06 np0005540825 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
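The two shell tasks at 04:46:05-04:46:06 build the OSD's backing storage: dd with count=0 seek=20G creates a sparse 20 GiB file (matching the kernel's "capacity change from 0 to 41943040" 512-byte sectors), losetup attaches it to /dev/loop3, and the LVM commands carve a single full-size LV. The same commands, unwrapped from the log's #012 newline escapes:

    - name: Create a sparse 20 GiB backing file and attach it to /dev/loop3
      ansible.builtin.shell: |
        dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
        losetup /dev/loop3 /var/lib/ceph-osd-0.img
        lsblk

    - name: Build the LVM stack for the OSD on the loop device
      ansible.builtin.shell: |
        pvcreate /dev/loop3
        vgcreate ceph_vg0 /dev/loop3
        lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
        lvs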
Dec  1 04:46:07 np0005540825 python3[72329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:46:07 np0005540825 python3[72402]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764582366.6447704-36890-150711431414766/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:46:08 np0005540825 python3[72452]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:46:08 np0005540825 systemd[1]: Reloading.
Dec  1 04:46:08 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:46:08 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:46:08 np0005540825 systemd[1]: Starting Ceph OSD losetup...
Dec  1 04:46:08 np0005540825 bash[72493]: /dev/loop3: [64513]:4194937 (/var/lib/ceph-osd-0.img)
Dec  1 04:46:08 np0005540825 systemd[1]: Finished Ceph OSD losetup.
Dec  1 04:46:08 np0005540825 lvm[72494]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:46:08 np0005540825 lvm[72494]: VG ceph_vg0 finished
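The template/copy pair above installs /etc/systemd/system/ceph-osd-losetup-0.service (rendered from ceph-osd-losetup.service.j2) so the loop device is re-attached on every boot, and the systemd task reloads units and starts it. The unit body itself is not logged; a oneshot unit along these lines would produce the "Starting/Finished Ceph OSD losetup" messages and the bare losetup listing seen above, but the ExecStart line is an assumption:

    # /etc/systemd/system/ceph-osd-losetup-0.service -- hypothetical reconstruction
    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    # Re-attach the backing file unless /dev/loop3 is already set up;
    # when it is, the first losetup just prints the listing seen in the log.
    ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 || /sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'
    RemainAfterExit=true

    [Install]
    WantedBy=multi-user.target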
Dec  1 04:46:11 np0005540825 python3[72518]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:46:13 np0005540825 python3[72611]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  1 04:46:16 np0005540825 python3[72668]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  1 04:46:19 np0005540825 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 04:46:19 np0005540825 systemd[1]: Starting man-db-cache-update.service...
Dec  1 04:46:19 np0005540825 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 04:46:19 np0005540825 systemd[1]: Finished man-db-cache-update.service.
Dec  1 04:46:19 np0005540825 systemd[1]: run-r868fb2190fdf49e19f5f0e3eb4986040.service: Deactivated successfully.
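cephadm itself arrives through the two dnf tasks above: first the release package that enables the CentOS Storage SIG "squid" repository, then the tool. Approximately:

    - name: Enable the CentOS Ceph Squid repository
      ansible.builtin.dnf:
        name: centos-release-ceph-squid
        state: present

    - name: Install cephadm from that repository
      ansible.builtin.dnf:
        name: cephadm
        state: present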
Dec  1 04:46:20 np0005540825 python3[72783]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 04:46:20 np0005540825 python3[72811]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:46:20 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:20 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:21 np0005540825 python3[72874]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:46:21 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:21 np0005540825 python3[72900]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:46:22 np0005540825 python3[72978]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:46:23 np0005540825 python3[73051]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764582382.4151087-37082-38426237685114/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:46:23 np0005540825 python3[73153]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:46:24 np0005540825 python3[73226]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764582383.634542-37100-84902195761330/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
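Before bootstrap, the play prepares the directories and input files cephadm will consume: /etc/ceph for the generated config and keyring, a spec directory in ceph-admin's home, the service spec, and an initial config to assimilate. A sketch (destination paths, ownership, and the source basenames ceph_spec.yml and initial_ceph.conf come from the log; the loop is an equivalent condensation of the two logged copy tasks):

    - name: Ensure /etc/ceph exists for the generated config and keyring
      ansible.builtin.file:
        path: /etc/ceph
        state: directory
        mode: "0755"

    - name: Create the spec directory for ceph-admin
      ansible.builtin.file:
        path: /home/ceph-admin/specs
        state: directory
        owner: ceph-admin
        group: ceph-admin
        mode: "0755"

    - name: Stage the service spec and the config to assimilate
      ansible.builtin.copy:
        src: "{{ item.src }}"
        dest: "{{ item.dest }}"
        owner: ceph-admin
        group: ceph-admin
        mode: "0644"
      loop:
        - { src: ceph_spec.yml, dest: /home/ceph-admin/specs/ceph_spec.yaml }
        - { src: initial_ceph.conf, dest: /home/ceph-admin/assimilate_ceph.conf }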
Dec  1 04:46:24 np0005540825 python3[73276]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 04:46:25 np0005540825 python3[73304]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 04:46:25 np0005540825 python3[73332]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 04:46:25 np0005540825 python3[73360]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
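The bootstrap invocation above is logged as a single shell line with a #012 newline escape and a stray line-continuation backslash before --skip-monitoring-stack. Unwrapped for readability, with every flag verbatim from the log:

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100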
Dec  1 04:46:26 np0005540825 systemd[1]: Created slice User Slice of UID 42477.
Dec  1 04:46:26 np0005540825 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  1 04:46:26 np0005540825 systemd-logind[789]: New session 19 of user ceph-admin.
Dec  1 04:46:26 np0005540825 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  1 04:46:26 np0005540825 systemd[1]: Starting User Manager for UID 42477...
Dec  1 04:46:26 np0005540825 systemd[73368]: Queued start job for default target Main User Target.
Dec  1 04:46:26 np0005540825 systemd[73368]: Created slice User Application Slice.
Dec  1 04:46:26 np0005540825 systemd[73368]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  1 04:46:26 np0005540825 systemd[73368]: Started Daily Cleanup of User's Temporary Directories.
Dec  1 04:46:26 np0005540825 systemd[73368]: Reached target Paths.
Dec  1 04:46:26 np0005540825 systemd[73368]: Reached target Timers.
Dec  1 04:46:26 np0005540825 systemd[73368]: Starting D-Bus User Message Bus Socket...
Dec  1 04:46:26 np0005540825 systemd[73368]: Starting Create User's Volatile Files and Directories...
Dec  1 04:46:26 np0005540825 systemd[73368]: Finished Create User's Volatile Files and Directories.
Dec  1 04:46:26 np0005540825 systemd[73368]: Listening on D-Bus User Message Bus Socket.
Dec  1 04:46:26 np0005540825 systemd[73368]: Reached target Sockets.
Dec  1 04:46:26 np0005540825 systemd[73368]: Reached target Basic System.
Dec  1 04:46:26 np0005540825 systemd[73368]: Reached target Main User Target.
Dec  1 04:46:26 np0005540825 systemd[73368]: Startup finished in 125ms.
Dec  1 04:46:26 np0005540825 systemd[1]: Started User Manager for UID 42477.
Dec  1 04:46:26 np0005540825 systemd[1]: Started Session 19 of User ceph-admin.
Dec  1 04:46:26 np0005540825 systemd[1]: session-19.scope: Deactivated successfully.
Dec  1 04:46:26 np0005540825 systemd-logind[789]: Session 19 logged out. Waiting for processes to exit.
Dec  1 04:46:26 np0005540825 systemd-logind[789]: Removed session 19.
Dec  1 04:46:26 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:26 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:28 np0005540825 systemd[1]: var-lib-containers-storage-overlay-compat3809860116-lower\x2dmapped.mount: Deactivated successfully.
Dec  1 04:46:36 np0005540825 systemd[1]: Stopping User Manager for UID 42477...
Dec  1 04:46:36 np0005540825 systemd[73368]: Activating special unit Exit the Session...
Dec  1 04:46:36 np0005540825 systemd[73368]: Stopped target Main User Target.
Dec  1 04:46:36 np0005540825 systemd[73368]: Stopped target Basic System.
Dec  1 04:46:36 np0005540825 systemd[73368]: Stopped target Paths.
Dec  1 04:46:36 np0005540825 systemd[73368]: Stopped target Sockets.
Dec  1 04:46:36 np0005540825 systemd[73368]: Stopped target Timers.
Dec  1 04:46:36 np0005540825 systemd[73368]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  1 04:46:36 np0005540825 systemd[73368]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  1 04:46:36 np0005540825 systemd[73368]: Closed D-Bus User Message Bus Socket.
Dec  1 04:46:36 np0005540825 systemd[73368]: Stopped Create User's Volatile Files and Directories.
Dec  1 04:46:36 np0005540825 systemd[73368]: Removed slice User Application Slice.
Dec  1 04:46:36 np0005540825 systemd[73368]: Reached target Shutdown.
Dec  1 04:46:36 np0005540825 systemd[73368]: Finished Exit the Session.
Dec  1 04:46:36 np0005540825 systemd[73368]: Reached target Exit the Session.
Dec  1 04:46:36 np0005540825 systemd[1]: user@42477.service: Deactivated successfully.
Dec  1 04:46:36 np0005540825 systemd[1]: Stopped User Manager for UID 42477.
Dec  1 04:46:36 np0005540825 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  1 04:46:36 np0005540825 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  1 04:46:36 np0005540825 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  1 04:46:36 np0005540825 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  1 04:46:36 np0005540825 systemd[1]: Removed slice User Slice of UID 42477.
Dec  1 04:46:44 np0005540825 podman[73463]: 2025-12-01 09:46:44.235727799 +0000 UTC m=+17.466878357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:44 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:44 np0005540825 podman[73525]: 2025-12-01 09:46:44.345206101 +0000 UTC m=+0.071014903 container create 26f8ed7e6d52966cc423e0d4f56dcebd62a1209e7488391af0e6f978e451f8ec (image=quay.io/ceph/ceph:v19, name=upbeat_visvesvaraya, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:46:44 np0005540825 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  1 04:46:44 np0005540825 systemd[1]: Started libpod-conmon-26f8ed7e6d52966cc423e0d4f56dcebd62a1209e7488391af0e6f978e451f8ec.scope.
Dec  1 04:46:44 np0005540825 podman[73525]: 2025-12-01 09:46:44.315110163 +0000 UTC m=+0.040919035 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:44 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:44 np0005540825 podman[73525]: 2025-12-01 09:46:44.479804159 +0000 UTC m=+0.205613031 container init 26f8ed7e6d52966cc423e0d4f56dcebd62a1209e7488391af0e6f978e451f8ec (image=quay.io/ceph/ceph:v19, name=upbeat_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 04:46:44 np0005540825 podman[73525]: 2025-12-01 09:46:44.493134633 +0000 UTC m=+0.218943435 container start 26f8ed7e6d52966cc423e0d4f56dcebd62a1209e7488391af0e6f978e451f8ec (image=quay.io/ceph/ceph:v19, name=upbeat_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:46:44 np0005540825 podman[73525]: 2025-12-01 09:46:44.497049517 +0000 UTC m=+0.222858389 container attach 26f8ed7e6d52966cc423e0d4f56dcebd62a1209e7488391af0e6f978e451f8ec (image=quay.io/ceph/ceph:v19, name=upbeat_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 04:46:44 np0005540825 upbeat_visvesvaraya[73541]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec  1 04:46:44 np0005540825 systemd[1]: libpod-26f8ed7e6d52966cc423e0d4f56dcebd62a1209e7488391af0e6f978e451f8ec.scope: Deactivated successfully.
Dec  1 04:46:44 np0005540825 podman[73546]: 2025-12-01 09:46:44.656418551 +0000 UTC m=+0.025185349 container died 26f8ed7e6d52966cc423e0d4f56dcebd62a1209e7488391af0e6f978e451f8ec (image=quay.io/ceph/ceph:v19, name=upbeat_visvesvaraya, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:46:44 np0005540825 systemd[1]: var-lib-containers-storage-overlay-3540c2bd3f3930b8b7de40eff664b8fd68b2aad6d0ad375a43e6438c5c818ca3-merged.mount: Deactivated successfully.
Dec  1 04:46:44 np0005540825 podman[73546]: 2025-12-01 09:46:44.691778218 +0000 UTC m=+0.060545006 container remove 26f8ed7e6d52966cc423e0d4f56dcebd62a1209e7488391af0e6f978e451f8ec (image=quay.io/ceph/ceph:v19, name=upbeat_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  1 04:46:44 np0005540825 systemd[1]: libpod-conmon-26f8ed7e6d52966cc423e0d4f56dcebd62a1209e7488391af0e6f978e451f8ec.scope: Deactivated successfully.
Dec  1 04:46:44 np0005540825 podman[73561]: 2025-12-01 09:46:44.755371994 +0000 UTC m=+0.035932104 container create e7f17ab0d238df5ffb07c6dc2b342413209fe8bf6ec94be21079e31bae037193 (image=quay.io/ceph/ceph:v19, name=vigilant_jackson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  1 04:46:44 np0005540825 systemd[1]: Started libpod-conmon-e7f17ab0d238df5ffb07c6dc2b342413209fe8bf6ec94be21079e31bae037193.scope.
Dec  1 04:46:44 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:44 np0005540825 podman[73561]: 2025-12-01 09:46:44.820686725 +0000 UTC m=+0.101246885 container init e7f17ab0d238df5ffb07c6dc2b342413209fe8bf6ec94be21079e31bae037193 (image=quay.io/ceph/ceph:v19, name=vigilant_jackson, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:46:44 np0005540825 podman[73561]: 2025-12-01 09:46:44.82578078 +0000 UTC m=+0.106340930 container start e7f17ab0d238df5ffb07c6dc2b342413209fe8bf6ec94be21079e31bae037193 (image=quay.io/ceph/ceph:v19, name=vigilant_jackson, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:46:44 np0005540825 vigilant_jackson[73577]: 167 167
Dec  1 04:46:44 np0005540825 systemd[1]: libpod-e7f17ab0d238df5ffb07c6dc2b342413209fe8bf6ec94be21079e31bae037193.scope: Deactivated successfully.
Dec  1 04:46:44 np0005540825 podman[73561]: 2025-12-01 09:46:44.829645813 +0000 UTC m=+0.110205953 container attach e7f17ab0d238df5ffb07c6dc2b342413209fe8bf6ec94be21079e31bae037193 (image=quay.io/ceph/ceph:v19, name=vigilant_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:46:44 np0005540825 podman[73561]: 2025-12-01 09:46:44.830111665 +0000 UTC m=+0.110671815 container died e7f17ab0d238df5ffb07c6dc2b342413209fe8bf6ec94be21079e31bae037193 (image=quay.io/ceph/ceph:v19, name=vigilant_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  1 04:46:44 np0005540825 podman[73561]: 2025-12-01 09:46:44.741184048 +0000 UTC m=+0.021744168 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:44 np0005540825 podman[73561]: 2025-12-01 09:46:44.872480718 +0000 UTC m=+0.153040868 container remove e7f17ab0d238df5ffb07c6dc2b342413209fe8bf6ec94be21079e31bae037193 (image=quay.io/ceph/ceph:v19, name=vigilant_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:44 np0005540825 systemd[1]: libpod-conmon-e7f17ab0d238df5ffb07c6dc2b342413209fe8bf6ec94be21079e31bae037193.scope: Deactivated successfully.
Dec  1 04:46:44 np0005540825 podman[73594]: 2025-12-01 09:46:44.947156198 +0000 UTC m=+0.044994704 container create af0c1187e0591125d25f38b8cb97b1350783e8b4c603d6a37cec22cd17b2367d (image=quay.io/ceph/ceph:v19, name=tender_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 04:46:44 np0005540825 systemd[1]: Started libpod-conmon-af0c1187e0591125d25f38b8cb97b1350783e8b4c603d6a37cec22cd17b2367d.scope.
Dec  1 04:46:45 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:45 np0005540825 podman[73594]: 2025-12-01 09:46:45.018446438 +0000 UTC m=+0.116284954 container init af0c1187e0591125d25f38b8cb97b1350783e8b4c603d6a37cec22cd17b2367d (image=quay.io/ceph/ceph:v19, name=tender_mclean, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:46:45 np0005540825 podman[73594]: 2025-12-01 09:46:44.92684956 +0000 UTC m=+0.024688046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:45 np0005540825 podman[73594]: 2025-12-01 09:46:45.029269855 +0000 UTC m=+0.127108361 container start af0c1187e0591125d25f38b8cb97b1350783e8b4c603d6a37cec22cd17b2367d (image=quay.io/ceph/ceph:v19, name=tender_mclean, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:46:45 np0005540825 podman[73594]: 2025-12-01 09:46:45.033137067 +0000 UTC m=+0.130975543 container attach af0c1187e0591125d25f38b8cb97b1350783e8b4c603d6a37cec22cd17b2367d (image=quay.io/ceph/ceph:v19, name=tender_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:46:45 np0005540825 tender_mclean[73612]: AQAFZC1pyKTdAxAAXKxNKpxsT+DbUbc6wAoZXA==
Dec  1 04:46:45 np0005540825 systemd[1]: libpod-af0c1187e0591125d25f38b8cb97b1350783e8b4c603d6a37cec22cd17b2367d.scope: Deactivated successfully.
Dec  1 04:46:45 np0005540825 podman[73594]: 2025-12-01 09:46:45.068449163 +0000 UTC m=+0.166287659 container died af0c1187e0591125d25f38b8cb97b1350783e8b4c603d6a37cec22cd17b2367d (image=quay.io/ceph/ceph:v19, name=tender_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 04:46:45 np0005540825 podman[73594]: 2025-12-01 09:46:45.105113335 +0000 UTC m=+0.202951811 container remove af0c1187e0591125d25f38b8cb97b1350783e8b4c603d6a37cec22cd17b2367d (image=quay.io/ceph/ceph:v19, name=tender_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:45 np0005540825 systemd[1]: libpod-conmon-af0c1187e0591125d25f38b8cb97b1350783e8b4c603d6a37cec22cd17b2367d.scope: Deactivated successfully.
Dec  1 04:46:45 np0005540825 podman[73632]: 2025-12-01 09:46:45.170690064 +0000 UTC m=+0.047796338 container create a9969f214389a649171bb24efc2dd4408cf9efb254d007753296df6586f950bf (image=quay.io/ceph/ceph:v19, name=gifted_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:45 np0005540825 systemd[1]: Started libpod-conmon-a9969f214389a649171bb24efc2dd4408cf9efb254d007753296df6586f950bf.scope.
Dec  1 04:46:45 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:45 np0005540825 podman[73632]: 2025-12-01 09:46:45.237214707 +0000 UTC m=+0.114321071 container init a9969f214389a649171bb24efc2dd4408cf9efb254d007753296df6586f950bf (image=quay.io/ceph/ceph:v19, name=gifted_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  1 04:46:45 np0005540825 podman[73632]: 2025-12-01 09:46:45.146168234 +0000 UTC m=+0.023274598 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:45 np0005540825 podman[73632]: 2025-12-01 09:46:45.244417738 +0000 UTC m=+0.121524042 container start a9969f214389a649171bb24efc2dd4408cf9efb254d007753296df6586f950bf (image=quay.io/ceph/ceph:v19, name=gifted_shirley, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:46:45 np0005540825 podman[73632]: 2025-12-01 09:46:45.248667231 +0000 UTC m=+0.125773535 container attach a9969f214389a649171bb24efc2dd4408cf9efb254d007753296df6586f950bf (image=quay.io/ceph/ceph:v19, name=gifted_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:46:45 np0005540825 gifted_shirley[73648]: AQAFZC1p4FjqEBAAN/AwBIRpYOz4UGmcuHHUsQ==
Dec  1 04:46:45 np0005540825 systemd[1]: libpod-a9969f214389a649171bb24efc2dd4408cf9efb254d007753296df6586f950bf.scope: Deactivated successfully.
Dec  1 04:46:45 np0005540825 podman[73632]: 2025-12-01 09:46:45.290525461 +0000 UTC m=+0.167631775 container died a9969f214389a649171bb24efc2dd4408cf9efb254d007753296df6586f950bf (image=quay.io/ceph/ceph:v19, name=gifted_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 04:46:45 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c2369fb8491c2811d021ec17475904a7d0e0413a5f825e9ec01bc46efde7f635-merged.mount: Deactivated successfully.
Dec  1 04:46:45 np0005540825 podman[73632]: 2025-12-01 09:46:45.333280274 +0000 UTC m=+0.210386538 container remove a9969f214389a649171bb24efc2dd4408cf9efb254d007753296df6586f950bf (image=quay.io/ceph/ceph:v19, name=gifted_shirley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  1 04:46:45 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:45 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:45 np0005540825 systemd[1]: libpod-conmon-a9969f214389a649171bb24efc2dd4408cf9efb254d007753296df6586f950bf.scope: Deactivated successfully.
Dec  1 04:46:45 np0005540825 podman[73667]: 2025-12-01 09:46:45.400960938 +0000 UTC m=+0.046733590 container create bfca69a0c7b0a457fc178ccb1fd521a493329af0a14e61897966f633fdc31ee5 (image=quay.io/ceph/ceph:v19, name=inspiring_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:46:45 np0005540825 systemd[1]: Started libpod-conmon-bfca69a0c7b0a457fc178ccb1fd521a493329af0a14e61897966f633fdc31ee5.scope.
Dec  1 04:46:45 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:45 np0005540825 podman[73667]: 2025-12-01 09:46:45.376571682 +0000 UTC m=+0.022344414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:45 np0005540825 podman[73667]: 2025-12-01 09:46:45.695968079 +0000 UTC m=+0.341740721 container init bfca69a0c7b0a457fc178ccb1fd521a493329af0a14e61897966f633fdc31ee5 (image=quay.io/ceph/ceph:v19, name=inspiring_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:45 np0005540825 podman[73667]: 2025-12-01 09:46:45.702193654 +0000 UTC m=+0.347966286 container start bfca69a0c7b0a457fc178ccb1fd521a493329af0a14e61897966f633fdc31ee5 (image=quay.io/ceph/ceph:v19, name=inspiring_allen, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  1 04:46:45 np0005540825 inspiring_allen[73683]: AQAFZC1pkC/yKhAAvZQhl15EhwuW3w/Of23bEg==
Dec  1 04:46:45 np0005540825 systemd[1]: libpod-bfca69a0c7b0a457fc178ccb1fd521a493329af0a14e61897966f633fdc31ee5.scope: Deactivated successfully.
Dec  1 04:46:47 np0005540825 podman[73667]: 2025-12-01 09:46:47.710564856 +0000 UTC m=+2.356337508 container attach bfca69a0c7b0a457fc178ccb1fd521a493329af0a14e61897966f633fdc31ee5 (image=quay.io/ceph/ceph:v19, name=inspiring_allen, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 04:46:47 np0005540825 podman[73667]: 2025-12-01 09:46:47.711193743 +0000 UTC m=+2.356966435 container died bfca69a0c7b0a457fc178ccb1fd521a493329af0a14e61897966f633fdc31ee5 (image=quay.io/ceph/ceph:v19, name=inspiring_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:47 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4ae9934f59fff818bf38cb62ebc3c65df02085503a6d741cf41131ca51daa4a5-merged.mount: Deactivated successfully.
Dec  1 04:46:47 np0005540825 podman[73667]: 2025-12-01 09:46:47.766380236 +0000 UTC m=+2.412152888 container remove bfca69a0c7b0a457fc178ccb1fd521a493329af0a14e61897966f633fdc31ee5 (image=quay.io/ceph/ceph:v19, name=inspiring_allen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:46:47 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:47 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:47 np0005540825 systemd[1]: libpod-conmon-bfca69a0c7b0a457fc178ccb1fd521a493329af0a14e61897966f633fdc31ee5.scope: Deactivated successfully.
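The three short-lived containers above (tender_mclean, gifted_shirley, inspiring_allen) each emit a single base64 secret: during bootstrap, cephadm mints the initial keys (mon., client.admin, and so on) by running key generation inside the ceph image. The exact invocation is not logged, but the output is consistent with a call like the following (an assumption):

    # hypothetical: generate and print a fresh Ceph secret key
    ceph-authtool --gen-print-key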
Dec  1 04:46:47 np0005540825 podman[73703]: 2025-12-01 09:46:47.867681831 +0000 UTC m=+0.066011301 container create a2863746e00764458fdba1f4efdb8423e203de24240c322cdac67e38234351bd (image=quay.io/ceph/ceph:v19, name=competent_lewin, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:46:47 np0005540825 systemd[1]: Started libpod-conmon-a2863746e00764458fdba1f4efdb8423e203de24240c322cdac67e38234351bd.scope.
Dec  1 04:46:47 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:47 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdcd9c13e6c678d1e657e3f3605480d01c154d9e7a757e08b87803bd761ad5d9/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:47 np0005540825 podman[73703]: 2025-12-01 09:46:47.841534948 +0000 UTC m=+0.039864418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:47 np0005540825 podman[73703]: 2025-12-01 09:46:47.951040641 +0000 UTC m=+0.149370121 container init a2863746e00764458fdba1f4efdb8423e203de24240c322cdac67e38234351bd (image=quay.io/ceph/ceph:v19, name=competent_lewin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:46:47 np0005540825 podman[73703]: 2025-12-01 09:46:47.967482957 +0000 UTC m=+0.165812427 container start a2863746e00764458fdba1f4efdb8423e203de24240c322cdac67e38234351bd (image=quay.io/ceph/ceph:v19, name=competent_lewin, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 04:46:47 np0005540825 podman[73703]: 2025-12-01 09:46:47.97175221 +0000 UTC m=+0.170081720 container attach a2863746e00764458fdba1f4efdb8423e203de24240c322cdac67e38234351bd (image=quay.io/ceph/ceph:v19, name=competent_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Dec  1 04:46:48 np0005540825 competent_lewin[73719]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec  1 04:46:48 np0005540825 competent_lewin[73719]: setting min_mon_release = quincy
Dec  1 04:46:48 np0005540825 competent_lewin[73719]: /usr/bin/monmaptool: set fsid to 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:46:48 np0005540825 competent_lewin[73719]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec  1 04:46:48 np0005540825 systemd[1]: libpod-a2863746e00764458fdba1f4efdb8423e203de24240c322cdac67e38234351bd.scope: Deactivated successfully.
Dec  1 04:46:48 np0005540825 podman[73703]: 2025-12-01 09:46:48.024451047 +0000 UTC m=+0.222780517 container died a2863746e00764458fdba1f4efdb8423e203de24240c322cdac67e38234351bd (image=quay.io/ceph/ceph:v19, name=competent_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  1 04:46:48 np0005540825 podman[73703]: 2025-12-01 09:46:48.073476137 +0000 UTC m=+0.271805577 container remove a2863746e00764458fdba1f4efdb8423e203de24240c322cdac67e38234351bd (image=quay.io/ceph/ceph:v19, name=competent_lewin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:46:48 np0005540825 systemd[1]: libpod-conmon-a2863746e00764458fdba1f4efdb8423e203de24240c322cdac67e38234351bd.scope: Deactivated successfully.
Dec  1 04:46:48 np0005540825 podman[73739]: 2025-12-01 09:46:48.161619534 +0000 UTC m=+0.058139563 container create fda65463c3ebdac9866e3f3998d66daf78689d4f5b9054bb59a5991425f27c16 (image=quay.io/ceph/ceph:v19, name=elated_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:46:48 np0005540825 systemd[1]: Started libpod-conmon-fda65463c3ebdac9866e3f3998d66daf78689d4f5b9054bb59a5991425f27c16.scope.
Dec  1 04:46:48 np0005540825 podman[73739]: 2025-12-01 09:46:48.132951414 +0000 UTC m=+0.029471513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:48 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bef912366cc064032d1474a85877de9f4fe618bac62610fbf4e1b2ad0fcddb3/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bef912366cc064032d1474a85877de9f4fe618bac62610fbf4e1b2ad0fcddb3/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bef912366cc064032d1474a85877de9f4fe618bac62610fbf4e1b2ad0fcddb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bef912366cc064032d1474a85877de9f4fe618bac62610fbf4e1b2ad0fcddb3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:48 np0005540825 podman[73739]: 2025-12-01 09:46:48.267785607 +0000 UTC m=+0.164305716 container init fda65463c3ebdac9866e3f3998d66daf78689d4f5b9054bb59a5991425f27c16 (image=quay.io/ceph/ceph:v19, name=elated_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  1 04:46:48 np0005540825 podman[73739]: 2025-12-01 09:46:48.279229841 +0000 UTC m=+0.175749860 container start fda65463c3ebdac9866e3f3998d66daf78689d4f5b9054bb59a5991425f27c16 (image=quay.io/ceph/ceph:v19, name=elated_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:48 np0005540825 podman[73739]: 2025-12-01 09:46:48.283008161 +0000 UTC m=+0.179528250 container attach fda65463c3ebdac9866e3f3998d66daf78689d4f5b9054bb59a5991425f27c16 (image=quay.io/ceph/ceph:v19, name=elated_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  1 04:46:48 np0005540825 systemd[1]: libpod-fda65463c3ebdac9866e3f3998d66daf78689d4f5b9054bb59a5991425f27c16.scope: Deactivated successfully.
Dec  1 04:46:48 np0005540825 podman[73782]: 2025-12-01 09:46:48.453154721 +0000 UTC m=+0.045767304 container died fda65463c3ebdac9866e3f3998d66daf78689d4f5b9054bb59a5991425f27c16 (image=quay.io/ceph/ceph:v19, name=elated_shamir, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 04:46:48 np0005540825 podman[73782]: 2025-12-01 09:46:48.500726702 +0000 UTC m=+0.093339225 container remove fda65463c3ebdac9866e3f3998d66daf78689d4f5b9054bb59a5991425f27c16 (image=quay.io/ceph/ceph:v19, name=elated_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:46:48 np0005540825 systemd[1]: libpod-conmon-fda65463c3ebdac9866e3f3998d66daf78689d4f5b9054bb59a5991425f27c16.scope: Deactivated successfully.
Dec  1 04:46:48 np0005540825 systemd[1]: Reloading.
Dec  1 04:46:48 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:46:48 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:46:48 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:48 np0005540825 systemd[1]: Reloading.
Dec  1 04:46:48 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:46:48 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:46:49 np0005540825 systemd[1]: Reached target All Ceph clusters and services.
Dec  1 04:46:49 np0005540825 systemd[1]: Reloading.
Dec  1 04:46:49 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:46:49 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:46:49 np0005540825 systemd[1]: Reached target Ceph cluster 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:46:49 np0005540825 systemd[1]: Reloading.
Dec  1 04:46:49 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:46:49 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:46:49 np0005540825 systemd[1]: Reloading.
Dec  1 04:46:49 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:46:49 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:46:49 np0005540825 systemd[1]: Created slice Slice /system/ceph-365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:46:49 np0005540825 systemd[1]: Reached target System Time Set.
Dec  1 04:46:49 np0005540825 systemd[1]: Reached target System Time Synchronized.
Dec  1 04:46:49 np0005540825 systemd[1]: Starting Ceph mon.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:46:50 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:50 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:50 np0005540825 podman[74039]: 2025-12-01 09:46:50.243070372 +0000 UTC m=+0.058546143 container create 330b98b9bf280a2a4c16da6715a3abbdcbb70d884db651c1a5fdc5f9d2ecdfa4 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:46:50 np0005540825 podman[74039]: 2025-12-01 09:46:50.213422086 +0000 UTC m=+0.028897907 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f1fc6369f880b56faf75368ffd6fe331aa590aa4767f1ab793dc01d827ab92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f1fc6369f880b56faf75368ffd6fe331aa590aa4767f1ab793dc01d827ab92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f1fc6369f880b56faf75368ffd6fe331aa590aa4767f1ab793dc01d827ab92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f1fc6369f880b56faf75368ffd6fe331aa590aa4767f1ab793dc01d827ab92/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:50 np0005540825 podman[74039]: 2025-12-01 09:46:50.351484326 +0000 UTC m=+0.166960087 container init 330b98b9bf280a2a4c16da6715a3abbdcbb70d884db651c1a5fdc5f9d2ecdfa4 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec  1 04:46:50 np0005540825 podman[74039]: 2025-12-01 09:46:50.36144719 +0000 UTC m=+0.176922921 container start 330b98b9bf280a2a4c16da6715a3abbdcbb70d884db651c1a5fdc5f9d2ecdfa4 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:50 np0005540825 bash[74039]: 330b98b9bf280a2a4c16da6715a3abbdcbb70d884db651c1a5fdc5f9d2ecdfa4
Dec  1 04:46:50 np0005540825 systemd[1]: Started Ceph mon.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: set uid:gid to 167:167 (ceph:ceph)
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: pidfile_write: ignore empty --pid-file
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: load: jerasure load: lrc 
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: RocksDB version: 7.9.2
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Git sha 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: DB SUMMARY
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: DB Session ID:  81CRLXM68BNWCSYJH7E1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: CURRENT file:  CURRENT
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: IDENTITY file:  IDENTITY
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                         Options.error_if_exists: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                       Options.create_if_missing: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                         Options.paranoid_checks: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                                     Options.env: 0x563b445a7c20
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                                Options.info_log: 0x563b45ee4d60
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                Options.max_file_opening_threads: 16
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                              Options.statistics: (nil)
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                               Options.use_fsync: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                       Options.max_log_file_size: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                         Options.allow_fallocate: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                        Options.use_direct_reads: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:          Options.create_missing_column_families: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                              Options.db_log_dir: 
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                                 Options.wal_dir: 
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                   Options.advise_random_on_open: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                    Options.write_buffer_manager: 0x563b45ee9900
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                            Options.rate_limiter: (nil)
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                  Options.unordered_write: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                               Options.row_cache: None
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                              Options.wal_filter: None
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.allow_ingest_behind: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.two_write_queues: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.manual_wal_flush: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.wal_compression: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.atomic_flush: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                 Options.log_readahead_size: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.allow_data_in_errors: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.db_host_id: __hostname__
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.max_background_jobs: 2
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.max_background_compactions: -1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.max_subcompactions: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.max_total_wal_size: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                          Options.max_open_files: -1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                          Options.bytes_per_sync: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:       Options.compaction_readahead_size: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                  Options.max_background_flushes: -1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Compression algorithms supported:
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: 	kZSTD supported: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: 	kXpressCompression supported: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: 	kBZip2Compression supported: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: 	kLZ4Compression supported: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: 	kZlibCompression supported: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: 	kSnappyCompression supported: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:           Options.merge_operator: 
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:        Options.compaction_filter: None
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b45ee4500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b45f09350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:        Options.write_buffer_size: 33554432
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:  Options.max_write_buffer_number: 2
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:          Options.compression: NoCompression
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.num_levels: 7
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 23cec031-3abb-406f-b210-f97462e45ae8
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582410434994, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582410437260, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "81CRLXM68BNWCSYJH7E1", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582410437427, "job": 1, "event": "recovery_finished"}
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563b45f0ae00
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: DB pointer 0x563b46014000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563b45f09350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@-1(???) e0 preinit fsid 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(probing) e0 win_standalone_election
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [DBG] : fsid 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [DBG] : last_changed 2025-12-01T09:46:48.019470+0000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [DBG] : created 2025-12-01T09:46:48.019470+0000
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,os=Linux}
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).mds e1 new map
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).mds e1 print_map
e1
btime 2025-12-01T09:46:50:475394+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [DBG] : fsmap 
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mkfs 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  1 04:46:50 np0005540825 podman[74060]: 2025-12-01 09:46:50.489439113 +0000 UTC m=+0.075718778 container create 7598b9b53d010ebc17be45986cbc000768cbe86d43d47e944b3d944b29dc0ecc (image=quay.io/ceph/ceph:v19, name=eloquent_lederberg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  1 04:46:50 np0005540825 systemd[1]: Started libpod-conmon-7598b9b53d010ebc17be45986cbc000768cbe86d43d47e944b3d944b29dc0ecc.scope.
Dec  1 04:46:50 np0005540825 podman[74060]: 2025-12-01 09:46:50.459653624 +0000 UTC m=+0.045933289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:50 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebccac7b185ef430bfaa455fd0b3a496176b707515f861a7bf220b5dd95242b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebccac7b185ef430bfaa455fd0b3a496176b707515f861a7bf220b5dd95242b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ebccac7b185ef430bfaa455fd0b3a496176b707515f861a7bf220b5dd95242b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
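The kernel's warning on each overlay remount reflects the classic 32-bit time_t ceiling on XFS filesystems created without the bigtime feature; a quick check of where 0x7fffffff lands:

    from datetime import datetime, timezone

    # XFS without bigtime stores inode timestamps as signed 32-bit seconds,
    # so the last representable instant is 0x7fffffff.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit)  # 2038-01-19 03:14:07+00:00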
Dec  1 04:46:50 np0005540825 podman[74060]: 2025-12-01 09:46:50.598206997 +0000 UTC m=+0.184486702 container init 7598b9b53d010ebc17be45986cbc000768cbe86d43d47e944b3d944b29dc0ecc (image=quay.io/ceph/ceph:v19, name=eloquent_lederberg, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  1 04:46:50 np0005540825 podman[74060]: 2025-12-01 09:46:50.614128809 +0000 UTC m=+0.200408464 container start 7598b9b53d010ebc17be45986cbc000768cbe86d43d47e944b3d944b29dc0ecc (image=quay.io/ceph/ceph:v19, name=eloquent_lederberg, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 04:46:50 np0005540825 podman[74060]: 2025-12-01 09:46:50.618940926 +0000 UTC m=+0.205220581 container attach 7598b9b53d010ebc17be45986cbc000768cbe86d43d47e944b3d944b29dc0ecc (image=quay.io/ceph/ceph:v19, name=eloquent_lederberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec  1 04:46:50 np0005540825 ceph-mon[74059]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971968725' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:  cluster:
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:    id:     365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:    health: HEALTH_OK
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]: 
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:  services:
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:    mon: 1 daemons, quorum compute-0 (age 0.363579s)
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:    mgr: no daemons active
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:    osd: 0 osds: 0 up, 0 in
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]: 
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:  data:
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:    pools:   0 pools, 0 pgs
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:    objects: 0 objects, 0 B
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:    usage:   0 B used, 0 B / 0 B avail
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]:    pgs:     
Dec  1 04:46:50 np0005540825 eloquent_lederberg[74115]: 
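The status report above (one mon in quorum, no mgr or OSDs yet, zero pools) comes from `ceph -s` run in the one-shot container. The same data can be consumed programmatically; a sketch assuming the ceph CLI is on PATH and field names as emitted by recent Ceph releases:

    import json
    import subprocess

    def cluster_status() -> dict:
        # Same report as the human-readable block above, as one JSON object
        # (health, monmap, osdmap, pgmap, ...).
        out = subprocess.run(
            ["ceph", "-s", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    status = cluster_status()
    print(status["health"]["status"])    # HEALTH_OK while bootstrapping
    print(status["monmap"]["num_mons"])  # 1 at this point in the bootstrap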
Dec  1 04:46:50 np0005540825 systemd[1]: libpod-7598b9b53d010ebc17be45986cbc000768cbe86d43d47e944b3d944b29dc0ecc.scope: Deactivated successfully.
Dec  1 04:46:50 np0005540825 podman[74060]: 2025-12-01 09:46:50.854137742 +0000 UTC m=+0.440417377 container died 7598b9b53d010ebc17be45986cbc000768cbe86d43d47e944b3d944b29dc0ecc (image=quay.io/ceph/ceph:v19, name=eloquent_lederberg, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  1 04:46:50 np0005540825 podman[74060]: 2025-12-01 09:46:50.891959164 +0000 UTC m=+0.478238789 container remove 7598b9b53d010ebc17be45986cbc000768cbe86d43d47e944b3d944b29dc0ecc (image=quay.io/ceph/ceph:v19, name=eloquent_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:50 np0005540825 systemd[1]: libpod-conmon-7598b9b53d010ebc17be45986cbc000768cbe86d43d47e944b3d944b29dc0ecc.scope: Deactivated successfully.
Dec  1 04:46:50 np0005540825 podman[74152]: 2025-12-01 09:46:50.995181501 +0000 UTC m=+0.071430495 container create 49ffe2ccb814ddce67c8bcf312a87aca31546295feeaa7a35ac7c77665e841c6 (image=quay.io/ceph/ceph:v19, name=trusting_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Dec  1 04:46:51 np0005540825 systemd[1]: Started libpod-conmon-49ffe2ccb814ddce67c8bcf312a87aca31546295feeaa7a35ac7c77665e841c6.scope.
Dec  1 04:46:51 np0005540825 podman[74152]: 2025-12-01 09:46:50.964531948 +0000 UTC m=+0.040781002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4010f3e456fedcdb95cfeb4f530a3117b58bf31ad529942e053f559f9fd6805b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4010f3e456fedcdb95cfeb4f530a3117b58bf31ad529942e053f559f9fd6805b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4010f3e456fedcdb95cfeb4f530a3117b58bf31ad529942e053f559f9fd6805b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4010f3e456fedcdb95cfeb4f530a3117b58bf31ad529942e053f559f9fd6805b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:51 np0005540825 podman[74152]: 2025-12-01 09:46:51.101397116 +0000 UTC m=+0.177646150 container init 49ffe2ccb814ddce67c8bcf312a87aca31546295feeaa7a35ac7c77665e841c6 (image=quay.io/ceph/ceph:v19, name=trusting_beaver, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:46:51 np0005540825 podman[74152]: 2025-12-01 09:46:51.114026661 +0000 UTC m=+0.190275665 container start 49ffe2ccb814ddce67c8bcf312a87aca31546295feeaa7a35ac7c77665e841c6 (image=quay.io/ceph/ceph:v19, name=trusting_beaver, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:46:51 np0005540825 podman[74152]: 2025-12-01 09:46:51.118735616 +0000 UTC m=+0.194984610 container attach 49ffe2ccb814ddce67c8bcf312a87aca31546295feeaa7a35ac7c77665e841c6 (image=quay.io/ceph/ceph:v19, name=trusting_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 04:46:51 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  1 04:46:51 np0005540825 ceph-mon[74059]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2639225827' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  1 04:46:51 np0005540825 ceph-mon[74059]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2639225827' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  1 04:46:51 np0005540825 trusting_beaver[74169]: 
Dec  1 04:46:51 np0005540825 trusting_beaver[74169]: [global]
Dec  1 04:46:51 np0005540825 trusting_beaver[74169]:     fsid = 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:46:51 np0005540825 trusting_beaver[74169]:     mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
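`config assimilate-conf` has echoed back the two settings it kept in [global]. An equivalent minimal ceph.conf can be produced with configparser; values below are copied from the lines above:

    import configparser

    conf = configparser.ConfigParser()
    conf["global"] = {
        "fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
        "mon_host": "[v2:192.168.122.100:3300,v1:192.168.122.100:6789]",
    }
    with open("ceph.conf", "w") as handle:
        conf.write(handle)  # emits the same two-key [global] section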
Dec  1 04:46:51 np0005540825 systemd[1]: libpod-49ffe2ccb814ddce67c8bcf312a87aca31546295feeaa7a35ac7c77665e841c6.scope: Deactivated successfully.
Dec  1 04:46:51 np0005540825 podman[74196]: 2025-12-01 09:46:51.415705809 +0000 UTC m=+0.039298773 container died 49ffe2ccb814ddce67c8bcf312a87aca31546295feeaa7a35ac7c77665e841c6 (image=quay.io/ceph/ceph:v19, name=trusting_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 04:46:51 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4010f3e456fedcdb95cfeb4f530a3117b58bf31ad529942e053f559f9fd6805b-merged.mount: Deactivated successfully.
Dec  1 04:46:51 np0005540825 podman[74196]: 2025-12-01 09:46:51.453167382 +0000 UTC m=+0.076760276 container remove 49ffe2ccb814ddce67c8bcf312a87aca31546295feeaa7a35ac7c77665e841c6 (image=quay.io/ceph/ceph:v19, name=trusting_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:46:51 np0005540825 systemd[1]: libpod-conmon-49ffe2ccb814ddce67c8bcf312a87aca31546295feeaa7a35ac7c77665e841c6.scope: Deactivated successfully.
Dec  1 04:46:51 np0005540825 ceph-mon[74059]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  1 04:46:51 np0005540825 ceph-mon[74059]: from='client.? 192.168.122.100:0/2639225827' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  1 04:46:51 np0005540825 ceph-mon[74059]: from='client.? 192.168.122.100:0/2639225827' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  1 04:46:51 np0005540825 podman[74211]: 2025-12-01 09:46:51.545349026 +0000 UTC m=+0.056678554 container create 7efb9db7ca971ce5b5bd423fdab751e99c9b85f2f4201af485fc9f682df06cd1 (image=quay.io/ceph/ceph:v19, name=eager_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 04:46:51 np0005540825 systemd[1]: Started libpod-conmon-7efb9db7ca971ce5b5bd423fdab751e99c9b85f2f4201af485fc9f682df06cd1.scope.
Dec  1 04:46:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfef0957ee12468a831e11bc653c927100e089ea3b7473a45c6703132e290ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfef0957ee12468a831e11bc653c927100e089ea3b7473a45c6703132e290ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfef0957ee12468a831e11bc653c927100e089ea3b7473a45c6703132e290ba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfef0957ee12468a831e11bc653c927100e089ea3b7473a45c6703132e290ba/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:51 np0005540825 podman[74211]: 2025-12-01 09:46:51.516915662 +0000 UTC m=+0.028245240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:51 np0005540825 podman[74211]: 2025-12-01 09:46:51.625711766 +0000 UTC m=+0.137041274 container init 7efb9db7ca971ce5b5bd423fdab751e99c9b85f2f4201af485fc9f682df06cd1 (image=quay.io/ceph/ceph:v19, name=eager_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:51 np0005540825 podman[74211]: 2025-12-01 09:46:51.639144752 +0000 UTC m=+0.150474260 container start 7efb9db7ca971ce5b5bd423fdab751e99c9b85f2f4201af485fc9f682df06cd1 (image=quay.io/ceph/ceph:v19, name=eager_raman, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 04:46:51 np0005540825 podman[74211]: 2025-12-01 09:46:51.643719023 +0000 UTC m=+0.155048541 container attach 7efb9db7ca971ce5b5bd423fdab751e99c9b85f2f4201af485fc9f682df06cd1 (image=quay.io/ceph/ceph:v19, name=eager_raman, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Dec  1 04:46:51 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:46:51 np0005540825 ceph-mon[74059]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489338829' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:46:51 np0005540825 systemd[1]: libpod-7efb9db7ca971ce5b5bd423fdab751e99c9b85f2f4201af485fc9f682df06cd1.scope: Deactivated successfully.
Dec  1 04:46:51 np0005540825 podman[74211]: 2025-12-01 09:46:51.897714786 +0000 UTC m=+0.409044274 container died 7efb9db7ca971ce5b5bd423fdab751e99c9b85f2f4201af485fc9f682df06cd1 (image=quay.io/ceph/ceph:v19, name=eager_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 04:46:51 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0cfef0957ee12468a831e11bc653c927100e089ea3b7473a45c6703132e290ba-merged.mount: Deactivated successfully.
Dec  1 04:46:51 np0005540825 podman[74211]: 2025-12-01 09:46:51.93935238 +0000 UTC m=+0.450681898 container remove 7efb9db7ca971ce5b5bd423fdab751e99c9b85f2f4201af485fc9f682df06cd1 (image=quay.io/ceph/ceph:v19, name=eager_raman, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:46:51 np0005540825 systemd[1]: libpod-conmon-7efb9db7ca971ce5b5bd423fdab751e99c9b85f2f4201af485fc9f682df06cd1.scope: Deactivated successfully.
Dec  1 04:46:51 np0005540825 systemd[1]: Stopping Ceph mon.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:46:52 np0005540825 ceph-mon[74059]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  1 04:46:52 np0005540825 ceph-mon[74059]: mon.compute-0@0(leader) e1 shutdown
Dec  1 04:46:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0[74055]: 2025-12-01T09:46:52.226+0000 7faac28c1640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  1 04:46:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0[74055]: 2025-12-01T09:46:52.226+0000 7faac28c1640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  1 04:46:52 np0005540825 ceph-mon[74059]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  1 04:46:52 np0005540825 ceph-mon[74059]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
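The stop sequence is: systemd signals the unit, /run/podman-init forwards SIGTERM to ceph-mon (PID 1 inside the container), and the daemon cancels RocksDB background work before exiting. A minimal Python sketch of the same handler pattern, with a hypothetical shutdown() standing in for the daemon's cleanup:

    import signal
    import sys
    import time

    def shutdown() -> None:
        # Stand-in for the mon's teardown: stop background work, close the DB.
        print("canceling all background work")
        print("shutdown complete")

    def handle_term(signum, frame) -> None:
        print("*** Got Signal Terminated ***")
        shutdown()
        sys.exit(0)

    signal.signal(signal.SIGTERM, handle_term)
    while True:        # daemon main loop
        time.sleep(1)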
Dec  1 04:46:52 np0005540825 podman[74295]: 2025-12-01 09:46:52.258455499 +0000 UTC m=+0.090600753 container died 330b98b9bf280a2a4c16da6715a3abbdcbb70d884db651c1a5fdc5f9d2ecdfa4 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  1 04:46:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b9f1fc6369f880b56faf75368ffd6fe331aa590aa4767f1ab793dc01d827ab92-merged.mount: Deactivated successfully.
Dec  1 04:46:52 np0005540825 podman[74295]: 2025-12-01 09:46:52.304351426 +0000 UTC m=+0.136496670 container remove 330b98b9bf280a2a4c16da6715a3abbdcbb70d884db651c1a5fdc5f9d2ecdfa4 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:46:52 np0005540825 bash[74295]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0
Dec  1 04:46:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 04:46:52 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@mon.compute-0.service: Deactivated successfully.
Dec  1 04:46:52 np0005540825 systemd[1]: Stopped Ceph mon.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:46:52 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@mon.compute-0.service: Consumed 1.243s CPU time.
Dec  1 04:46:52 np0005540825 systemd[1]: Starting Ceph mon.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:46:52 np0005540825 podman[74397]: 2025-12-01 09:46:52.748624904 +0000 UTC m=+0.054387473 container create 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 04:46:52 np0005540825 podman[74397]: 2025-12-01 09:46:52.719751868 +0000 UTC m=+0.025514517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26667e58dfbc5189a844dc38736b3e8e0a63c23b7dc6215b6776d84dc6d59d62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26667e58dfbc5189a844dc38736b3e8e0a63c23b7dc6215b6776d84dc6d59d62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26667e58dfbc5189a844dc38736b3e8e0a63c23b7dc6215b6776d84dc6d59d62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26667e58dfbc5189a844dc38736b3e8e0a63c23b7dc6215b6776d84dc6d59d62/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:52 np0005540825 podman[74397]: 2025-12-01 09:46:52.842580774 +0000 UTC m=+0.148343363 container init 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:46:52 np0005540825 podman[74397]: 2025-12-01 09:46:52.856346189 +0000 UTC m=+0.162108758 container start 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:46:52 np0005540825 bash[74397]: 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1
Dec  1 04:46:52 np0005540825 systemd[1]: Started Ceph mon.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: set uid:gid to 167:167 (ceph:ceph)
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: pidfile_write: ignore empty --pid-file
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: load: jerasure load: lrc 
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: RocksDB version: 7.9.2
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Git sha 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: DB SUMMARY
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: DB Session ID:  WQRU59OV9V8EC0IMYNIX
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: CURRENT file:  CURRENT
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: IDENTITY file:  IDENTITY
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58743 ; 
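The DB SUMMARY above enumerates the mon store layout: CURRENT names the live manifest (MANIFEST-000010), plus one SST file and one write-ahead log. A sketch that reproduces that inventory for any RocksDB directory:

    from pathlib import Path

    def summarize_store(db_dir: str) -> None:
        store = Path(db_dir)
        current = store / "CURRENT"
        if current.exists():
            # CURRENT holds the name of the live manifest, e.g. MANIFEST-000010
            print("CURRENT ->", current.read_text().strip())
        for manifest in sorted(store.glob("MANIFEST-*")):
            print(f"{manifest.name}: {manifest.stat().st_size} bytes")
        ssts = sorted(store.glob("*.sst"))
        print(f"SST files: {len(ssts)} ({', '.join(s.name for s in ssts)})")
        for wal in sorted(store.glob("*.log")):
            print(f"WAL {wal.name}: {wal.stat().st_size} bytes")

    summarize_store("/var/lib/ceph/mon/ceph-compute-0/store.db")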
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                         Options.error_if_exists: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                       Options.create_if_missing: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                         Options.paranoid_checks: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                                     Options.env: 0x56396f0aac20
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                                Options.info_log: 0x563970105ac0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                Options.max_file_opening_threads: 16
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                              Options.statistics: (nil)
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                               Options.use_fsync: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                       Options.max_log_file_size: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                         Options.allow_fallocate: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                        Options.use_direct_reads: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:          Options.create_missing_column_families: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                              Options.db_log_dir: 
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                                 Options.wal_dir: 
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                   Options.advise_random_on_open: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                    Options.write_buffer_manager: 0x563970109900
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                            Options.rate_limiter: (nil)
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                  Options.unordered_write: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                               Options.row_cache: None
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                              Options.wal_filter: None
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.allow_ingest_behind: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.two_write_queues: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.manual_wal_flush: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.wal_compression: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.atomic_flush: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                 Options.log_readahead_size: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.allow_data_in_errors: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.db_host_id: __hostname__
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.max_background_jobs: 2
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.max_background_compactions: -1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.max_subcompactions: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.max_total_wal_size: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                          Options.max_open_files: -1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                          Options.bytes_per_sync: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:       Options.compaction_readahead_size: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                  Options.max_background_flushes: -1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Compression algorithms supported:
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:     kZSTD supported: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:     kXpressCompression supported: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:     kBZip2Compression supported: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:     kZSTDNotFinalCompression supported: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:     kLZ4Compression supported: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:     kZlibCompression supported: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:     kLZ4HCCompression supported: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:     kSnappyCompression supported: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:           Options.merge_operator: 
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:        Options.compaction_filter: None
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:            table_factory options:
    flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563970104aa0)
    cache_index_and_filter_blocks: 1
    cache_index_and_filter_blocks_with_high_priority: 0
    pin_l0_filter_and_index_blocks_in_cache: 0
    pin_top_level_index_and_filter: 1
    index_type: 0
    data_block_index_type: 0
    index_shortening: 1
    data_block_hash_table_util_ratio: 0.750000
    checksum: 4
    no_block_cache: 0
    block_cache: 0x563970129350
    block_cache_name: BinnedLRUCache
    block_cache_options:
      capacity : 536870912
      num_shard_bits : 4
      strict_capacity_limit : 0
      high_pri_pool_ratio: 0.000
    block_cache_compressed: (nil)
    persistent_cache: (nil)
    block_size: 4096
    block_size_deviation: 10
    block_restart_interval: 16
    index_block_restart_interval: 1
    metadata_block_size: 4096
    partition_filters: 0
    use_delta_encoding: 1
    filter_policy: bloomfilter
    whole_key_filtering: 1
    verify_compression: 0
    read_amp_bytes_per_bit: 0
    format_version: 5
    enable_index_compression: 1
    block_align: 0
    max_auto_readahead_size: 262144
    prepopulate_block_cache: 0
    initial_auto_readahead_size: 8192
    num_file_reads_for_auto_readahead: 2
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:        Options.write_buffer_size: 33554432
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:  Options.max_write_buffer_number: 2
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:          Options.compression: NoCompression
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.num_levels: 7
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 23cec031-3abb-406f-b210-f97462e45ae8
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582412939043, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582412944236, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56968, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54485, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582412, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582412944502, "job": 1, "event": "recovery_finished"}
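RocksDB interleaves machine-readable EVENT_LOG_v1 records (recovery_started, table_file_creation, recovery_finished above) with its text log; each record is a single JSON object after a fixed marker. A sketch that extracts the payloads from a syslog file (the path is illustrative):

    import json

    MARKER = "EVENT_LOG_v1 "

    def iter_events(path: str):
        # Yield the JSON payload of every EVENT_LOG_v1 record in the file.
        with open(path) as logfile:
            for line in logfile:
                start = line.find(MARKER)
                if start != -1:
                    yield json.loads(line[start + len(MARKER):])

    for event in iter_events("/var/log/messages"):
        print(event["time_micros"], event["event"])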
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec  1 04:46:52 np0005540825 podman[74417]: 2025-12-01 09:46:52.951364848 +0000 UTC m=+0.058378498 container create dcfbd8f6c6abaf61f6fbd3a75d51cf3d248a64d3468992fd551c5ef210b9d9f1 (image=quay.io/ceph/ceph:v19, name=hardcore_chandrasekhar, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56397012ae00
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: DB pointer 0x563970234000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 2.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 2.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563970129350#2 capacity: 512.00 MB usage: 1.80 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)

** File Read Latency Histogram By Level [default] **
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@-1(???) e1 preinit fsid 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@-1(???).mds e1 new map
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@-1(???).mds e1 print_map
e1
btime 2025-12-01T09:46:50:475394+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsid 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : last_changed 2025-12-01T09:46:48.019470+0000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : created 2025-12-01T09:46:48.019470+0000
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap 
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  1 04:46:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
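At this point the monitor has won a standalone election and reports a one-member quorum (rank 0, mon.compute-0). A quick way to confirm the same state out-of-band is the quorum_status command; a sketch, assuming the ceph CLI and admin keyring are reachable (e.g. inside cephadm shell):

    import json
    import subprocess

    # Ask the cluster for its quorum state, as JSON.
    out = subprocess.check_output(["ceph", "quorum_status", "--format", "json"])
    qs = json.loads(out)
    print("ranks:", qs["quorum"])               # expected here: [0]
    print("names:", qs["quorum_names"])         # expected here: ['compute-0']
    print("leader:", qs["quorum_leader_name"])  # expected here: compute-0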
Dec  1 04:46:53 np0005540825 podman[74417]: 2025-12-01 09:46:52.921019884 +0000 UTC m=+0.028033594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:53 np0005540825 systemd[1]: Started libpod-conmon-dcfbd8f6c6abaf61f6fbd3a75d51cf3d248a64d3468992fd551c5ef210b9d9f1.scope.
Dec  1 04:46:53 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3952d06b84e65bb487e9c7c81480ae35efa1dcd8609eff20e96480662bd1ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3952d06b84e65bb487e9c7c81480ae35efa1dcd8609eff20e96480662bd1ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3952d06b84e65bb487e9c7c81480ae35efa1dcd8609eff20e96480662bd1ca/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:53 np0005540825 podman[74417]: 2025-12-01 09:46:53.083197693 +0000 UTC m=+0.190211333 container init dcfbd8f6c6abaf61f6fbd3a75d51cf3d248a64d3468992fd551c5ef210b9d9f1 (image=quay.io/ceph/ceph:v19, name=hardcore_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 04:46:53 np0005540825 podman[74417]: 2025-12-01 09:46:53.094706408 +0000 UTC m=+0.201720028 container start dcfbd8f6c6abaf61f6fbd3a75d51cf3d248a64d3468992fd551c5ef210b9d9f1 (image=quay.io/ceph/ceph:v19, name=hardcore_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 04:46:53 np0005540825 podman[74417]: 2025-12-01 09:46:53.098091538 +0000 UTC m=+0.205105168 container attach dcfbd8f6c6abaf61f6fbd3a75d51cf3d248a64d3468992fd551c5ef210b9d9f1 (image=quay.io/ceph/ceph:v19, name=hardcore_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 04:46:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec  1 04:46:53 np0005540825 systemd[1]: libpod-dcfbd8f6c6abaf61f6fbd3a75d51cf3d248a64d3468992fd551c5ef210b9d9f1.scope: Deactivated successfully.
Dec  1 04:46:53 np0005540825 podman[74417]: 2025-12-01 09:46:53.34828065 +0000 UTC m=+0.455294260 container died dcfbd8f6c6abaf61f6fbd3a75d51cf3d248a64d3468992fd551c5ef210b9d9f1 (image=quay.io/ceph/ceph:v19, name=hardcore_chandrasekhar, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  1 04:46:53 np0005540825 systemd[1]: var-lib-containers-storage-overlay-fc3952d06b84e65bb487e9c7c81480ae35efa1dcd8609eff20e96480662bd1ca-merged.mount: Deactivated successfully.
Dec  1 04:46:53 np0005540825 podman[74417]: 2025-12-01 09:46:53.39466731 +0000 UTC m=+0.501680920 container remove dcfbd8f6c6abaf61f6fbd3a75d51cf3d248a64d3468992fd551c5ef210b9d9f1 (image=quay.io/ceph/ceph:v19, name=hardcore_chandrasekhar, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:46:53 np0005540825 systemd[1]: libpod-conmon-dcfbd8f6c6abaf61f6fbd3a75d51cf3d248a64d3468992fd551c5ef210b9d9f1.scope: Deactivated successfully.
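The create/init/start/attach/died/remove sequence above is cephadm's normal pattern during bootstrap: each ceph CLI call runs in its own short-lived container (here named hardcore_chandrasekhar by podman), so the whole lifecycle repeats for every command. One way to watch that churn live is podman's event stream; a sketch, with the JSON field names taken as assumptions from podman's documented event output:

    import json
    import subprocess

    # Stream container lifecycle events, one JSON object per line.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "event=create", "--filter", "event=remove"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"), ev.get("Image"))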
Dec  1 04:46:53 np0005540825 podman[74509]: 2025-12-01 09:46:53.491285392 +0000 UTC m=+0.076747506 container create 39d91980faab4f2c08614f2498c8b7d3e269c70cbdf029e801c8e856ab325676 (image=quay.io/ceph/ceph:v19, name=flamboyant_margulis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:53 np0005540825 podman[74509]: 2025-12-01 09:46:53.437837845 +0000 UTC m=+0.023300039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:53 np0005540825 systemd[1]: Started libpod-conmon-39d91980faab4f2c08614f2498c8b7d3e269c70cbdf029e801c8e856ab325676.scope.
Dec  1 04:46:53 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c73c97b36b1b1003e4e4f9374f76309accd105cdc47211870deddf27f2e88a5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c73c97b36b1b1003e4e4f9374f76309accd105cdc47211870deddf27f2e88a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c73c97b36b1b1003e4e4f9374f76309accd105cdc47211870deddf27f2e88a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:53 np0005540825 podman[74509]: 2025-12-01 09:46:53.636172792 +0000 UTC m=+0.221634986 container init 39d91980faab4f2c08614f2498c8b7d3e269c70cbdf029e801c8e856ab325676 (image=quay.io/ceph/ceph:v19, name=flamboyant_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 04:46:53 np0005540825 podman[74509]: 2025-12-01 09:46:53.646928627 +0000 UTC m=+0.232390771 container start 39d91980faab4f2c08614f2498c8b7d3e269c70cbdf029e801c8e856ab325676 (image=quay.io/ceph/ceph:v19, name=flamboyant_margulis, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:46:53 np0005540825 podman[74509]: 2025-12-01 09:46:53.651657703 +0000 UTC m=+0.237119917 container attach 39d91980faab4f2c08614f2498c8b7d3e269c70cbdf029e801c8e856ab325676 (image=quay.io/ceph/ceph:v19, name=flamboyant_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:46:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec  1 04:46:53 np0005540825 systemd[1]: libpod-39d91980faab4f2c08614f2498c8b7d3e269c70cbdf029e801c8e856ab325676.scope: Deactivated successfully.
Dec  1 04:46:53 np0005540825 podman[74509]: 2025-12-01 09:46:53.893202786 +0000 UTC m=+0.478664900 container died 39d91980faab4f2c08614f2498c8b7d3e269c70cbdf029e801c8e856ab325676 (image=quay.io/ceph/ceph:v19, name=flamboyant_margulis, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:46:53 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2c73c97b36b1b1003e4e4f9374f76309accd105cdc47211870deddf27f2e88a5-merged.mount: Deactivated successfully.
Dec  1 04:46:53 np0005540825 podman[74509]: 2025-12-01 09:46:53.940257474 +0000 UTC m=+0.525719598 container remove 39d91980faab4f2c08614f2498c8b7d3e269c70cbdf029e801c8e856ab325676 (image=quay.io/ceph/ceph:v19, name=flamboyant_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:46:53 np0005540825 systemd[1]: libpod-conmon-39d91980faab4f2c08614f2498c8b7d3e269c70cbdf029e801c8e856ab325676.scope: Deactivated successfully.
Dec  1 04:46:53 np0005540825 ceph-mon[74416]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  1 04:46:53 np0005540825 systemd[1]: Reloading.
Dec  1 04:46:54 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:46:54 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:46:54 np0005540825 systemd[1]: Reloading.
Dec  1 04:46:54 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:46:54 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:46:54 np0005540825 systemd[1]: Starting Ceph mgr.compute-0.fospow for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:46:54 np0005540825 podman[74689]: 2025-12-01 09:46:54.819645496 +0000 UTC m=+0.070510940 container create 47856f96919c8b587afcc93b7694f021080e9f89c3957592cab4c416bf3dbfaf (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  1 04:46:54 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e40fa7d9e1d117ae5515e2fb4e46c3c65489b3102cdb2639294766c7a0706b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:54 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e40fa7d9e1d117ae5515e2fb4e46c3c65489b3102cdb2639294766c7a0706b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:54 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e40fa7d9e1d117ae5515e2fb4e46c3c65489b3102cdb2639294766c7a0706b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:54 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e40fa7d9e1d117ae5515e2fb4e46c3c65489b3102cdb2639294766c7a0706b/merged/var/lib/ceph/mgr/ceph-compute-0.fospow supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:54 np0005540825 podman[74689]: 2025-12-01 09:46:54.787042182 +0000 UTC m=+0.037907716 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:54 np0005540825 podman[74689]: 2025-12-01 09:46:54.889478987 +0000 UTC m=+0.140344451 container init 47856f96919c8b587afcc93b7694f021080e9f89c3957592cab4c416bf3dbfaf (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  1 04:46:54 np0005540825 podman[74689]: 2025-12-01 09:46:54.900477219 +0000 UTC m=+0.151342673 container start 47856f96919c8b587afcc93b7694f021080e9f89c3957592cab4c416bf3dbfaf (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:54 np0005540825 bash[74689]: 47856f96919c8b587afcc93b7694f021080e9f89c3957592cab4c416bf3dbfaf
Dec  1 04:46:54 np0005540825 systemd[1]: Started Ceph mgr.compute-0.fospow for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:46:54 np0005540825 ceph-mgr[74709]: set uid:gid to 167:167 (ceph:ceph)
Dec  1 04:46:54 np0005540825 ceph-mgr[74709]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  1 04:46:54 np0005540825 ceph-mgr[74709]: pidfile_write: ignore empty --pid-file
Dec  1 04:46:54 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'alerts'
Dec  1 04:46:55 np0005540825 podman[74710]: 2025-12-01 09:46:55.001732123 +0000 UTC m=+0.054499025 container create cfb2ba0b4d58576bb7c9b37c747d0a77fbc7dd04b3817958b4dd3f90f4f9a24c (image=quay.io/ceph/ceph:v19, name=blissful_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec  1 04:46:55 np0005540825 systemd[1]: Started libpod-conmon-cfb2ba0b4d58576bb7c9b37c747d0a77fbc7dd04b3817958b4dd3f90f4f9a24c.scope.
Dec  1 04:46:55 np0005540825 podman[74710]: 2025-12-01 09:46:54.977573213 +0000 UTC m=+0.030340195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:55 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fa52cd9d4b59dad65c61a54fa0ddf4c74b9e24b601d572036fd55603904000/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fa52cd9d4b59dad65c61a54fa0ddf4c74b9e24b601d572036fd55603904000/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fa52cd9d4b59dad65c61a54fa0ddf4c74b9e24b601d572036fd55603904000/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:55 np0005540825 ceph-mgr[74709]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:46:55 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'balancer'
Dec  1 04:46:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:55.100+0000 7face6e8f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:46:55 np0005540825 podman[74710]: 2025-12-01 09:46:55.118044577 +0000 UTC m=+0.170811479 container init cfb2ba0b4d58576bb7c9b37c747d0a77fbc7dd04b3817958b4dd3f90f4f9a24c (image=quay.io/ceph/ceph:v19, name=blissful_torvalds, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:46:55 np0005540825 podman[74710]: 2025-12-01 09:46:55.129650924 +0000 UTC m=+0.182417846 container start cfb2ba0b4d58576bb7c9b37c747d0a77fbc7dd04b3817958b4dd3f90f4f9a24c (image=quay.io/ceph/ceph:v19, name=blissful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:46:55 np0005540825 podman[74710]: 2025-12-01 09:46:55.133476946 +0000 UTC m=+0.186243868 container attach cfb2ba0b4d58576bb7c9b37c747d0a77fbc7dd04b3817958b4dd3f90f4f9a24c (image=quay.io/ceph/ceph:v19, name=blissful_torvalds, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  1 04:46:55 np0005540825 ceph-mgr[74709]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  1 04:46:55 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'cephadm'
Dec  1 04:46:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:55.191+0000 7face6e8f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
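The repeated "-1 mgr[py] Module ... has missing NOTIFY_TYPES member" warnings are ceph-mgr noting that a bundled Python module does not declare which cluster notifications it consumes; loading still succeeds. For illustration only (this is not one of the modules above, and the class is hypothetical), a module that declares the member looks roughly like this:

    from mgr_module import MgrModule, NotifyType

    class ExampleModule(MgrModule):
        # Declaring NOTIFY_TYPES tells the mgr which events to deliver
        # and avoids the "missing NOTIFY_TYPES member" warning.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            if notify_type == NotifyType.osd_map:
                self.log.debug("osdmap changed: %s", notify_id)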
Dec  1 04:46:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  1 04:46:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466039884' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]: 
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]: {
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "health": {
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "status": "HEALTH_OK",
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "checks": {},
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "mutes": []
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    },
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "election_epoch": 5,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "quorum": [
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        0
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    ],
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "quorum_names": [
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "compute-0"
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    ],
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "quorum_age": 2,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "monmap": {
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "epoch": 1,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "min_mon_release_name": "squid",
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "num_mons": 1
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    },
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "osdmap": {
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "epoch": 1,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "num_osds": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "num_up_osds": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "osd_up_since": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "num_in_osds": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "osd_in_since": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "num_remapped_pgs": 0
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    },
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "pgmap": {
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "pgs_by_state": [],
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "num_pgs": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "num_pools": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "num_objects": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "data_bytes": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "bytes_used": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "bytes_avail": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "bytes_total": 0
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    },
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "fsmap": {
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "epoch": 1,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "btime": "2025-12-01T09:46:50:475394+0000",
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "by_rank": [],
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "up:standby": 0
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    },
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "mgrmap": {
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "available": false,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "num_standbys": 0,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "modules": [
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:            "iostat",
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:            "nfs",
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:            "restful"
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        ],
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "services": {}
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    },
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "servicemap": {
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "epoch": 1,
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "modified": "2025-12-01T09:46:50.478454+0000",
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:        "services": {}
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    },
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]:    "progress_events": {}
Dec  1 04:46:55 np0005540825 blissful_torvalds[74746]: }
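The JSON block above is the output of the status command the monitor just dispatched (cmd=[{"prefix": "status", "format": "json-pretty"}]), printed by the short-lived blissful_torvalds container: quorum established, no OSDs yet, and no active mgr ("available": false). A sketch that re-issues the same query and checks those fields, assuming the ceph CLI is available:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "status", "--format", "json-pretty"]))

    assert status["health"]["status"] == "HEALTH_OK"
    assert status["monmap"]["num_mons"] == 1
    # At this point in the log the mgr is still starting and no OSDs exist.
    print("mgr available:", status["mgrmap"]["available"])
    print("osds:", status["osdmap"]["num_osds"])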
Dec  1 04:46:55 np0005540825 systemd[1]: libpod-cfb2ba0b4d58576bb7c9b37c747d0a77fbc7dd04b3817958b4dd3f90f4f9a24c.scope: Deactivated successfully.
Dec  1 04:46:55 np0005540825 podman[74710]: 2025-12-01 09:46:55.341075188 +0000 UTC m=+0.393842090 container died cfb2ba0b4d58576bb7c9b37c747d0a77fbc7dd04b3817958b4dd3f90f4f9a24c (image=quay.io/ceph/ceph:v19, name=blissful_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 04:46:55 np0005540825 systemd[1]: var-lib-containers-storage-overlay-45fa52cd9d4b59dad65c61a54fa0ddf4c74b9e24b601d572036fd55603904000-merged.mount: Deactivated successfully.
Dec  1 04:46:55 np0005540825 podman[74710]: 2025-12-01 09:46:55.381750067 +0000 UTC m=+0.434516949 container remove cfb2ba0b4d58576bb7c9b37c747d0a77fbc7dd04b3817958b4dd3f90f4f9a24c (image=quay.io/ceph/ceph:v19, name=blissful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 04:46:55 np0005540825 systemd[1]: libpod-conmon-cfb2ba0b4d58576bb7c9b37c747d0a77fbc7dd04b3817958b4dd3f90f4f9a24c.scope: Deactivated successfully.
Dec  1 04:46:55 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'crash'
Dec  1 04:46:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:56.006+0000 7face6e8f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'dashboard'
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'devicehealth'
Dec  1 04:46:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:56.640+0000 7face6e8f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'diskprediction_local'
Dec  1 04:46:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  1 04:46:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  1 04:46:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  from numpy import show_config as show_numpy_config
Dec  1 04:46:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:56.800+0000 7face6e8f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'influx'
Dec  1 04:46:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:56.866+0000 7face6e8f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'insights'
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'iostat'
Dec  1 04:46:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:56.997+0000 7face6e8f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:46:56 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'k8sevents'
Dec  1 04:46:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'localpool'
Dec  1 04:46:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mds_autoscaler'
Dec  1 04:46:57 np0005540825 podman[74796]: 2025-12-01 09:46:57.464749887 +0000 UTC m=+0.053091648 container create 42b7529e48c529a1fe24a634605079d9d3c6d42b9e70f925628bece7d2190a47 (image=quay.io/ceph/ceph:v19, name=magical_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec  1 04:46:57 np0005540825 systemd[1]: Started libpod-conmon-42b7529e48c529a1fe24a634605079d9d3c6d42b9e70f925628bece7d2190a47.scope.
Dec  1 04:46:57 np0005540825 podman[74796]: 2025-12-01 09:46:57.438417199 +0000 UTC m=+0.026759020 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:46:57 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:57 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae60eef91ed8564f74ac65a5821f50ebf7726acb924c0b396e905703e2a503de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:57 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae60eef91ed8564f74ac65a5821f50ebf7726acb924c0b396e905703e2a503de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:57 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae60eef91ed8564f74ac65a5821f50ebf7726acb924c0b396e905703e2a503de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:57 np0005540825 podman[74796]: 2025-12-01 09:46:57.568450496 +0000 UTC m=+0.156792317 container init 42b7529e48c529a1fe24a634605079d9d3c6d42b9e70f925628bece7d2190a47 (image=quay.io/ceph/ceph:v19, name=magical_galileo, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:46:57 np0005540825 podman[74796]: 2025-12-01 09:46:57.5765041 +0000 UTC m=+0.164845861 container start 42b7529e48c529a1fe24a634605079d9d3c6d42b9e70f925628bece7d2190a47 (image=quay.io/ceph/ceph:v19, name=magical_galileo, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:46:57 np0005540825 podman[74796]: 2025-12-01 09:46:57.58028015 +0000 UTC m=+0.168621901 container attach 42b7529e48c529a1fe24a634605079d9d3c6d42b9e70f925628bece7d2190a47 (image=quay.io/ceph/ceph:v19, name=magical_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  1 04:46:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mirroring'
Dec  1 04:46:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'nfs'
Dec  1 04:46:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  1 04:46:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1076890452' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  1 04:46:57 np0005540825 magical_galileo[74812]: 
Dec  1 04:46:57 np0005540825 magical_galileo[74812]: {
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "health": {
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "status": "HEALTH_OK",
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "checks": {},
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "mutes": []
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    },
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "election_epoch": 5,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "quorum": [
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        0
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    ],
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "quorum_names": [
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "compute-0"
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    ],
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "quorum_age": 4,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "monmap": {
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "epoch": 1,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "min_mon_release_name": "squid",
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "num_mons": 1
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    },
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "osdmap": {
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "epoch": 1,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "num_osds": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "num_up_osds": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "osd_up_since": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "num_in_osds": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "osd_in_since": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "num_remapped_pgs": 0
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    },
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "pgmap": {
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "pgs_by_state": [],
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "num_pgs": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "num_pools": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "num_objects": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "data_bytes": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "bytes_used": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "bytes_avail": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "bytes_total": 0
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    },
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "fsmap": {
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "epoch": 1,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "btime": "2025-12-01T09:46:50:475394+0000",
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "by_rank": [],
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "up:standby": 0
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    },
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "mgrmap": {
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "available": false,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "num_standbys": 0,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "modules": [
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:            "iostat",
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:            "nfs",
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:            "restful"
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        ],
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "services": {}
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    },
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "servicemap": {
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "epoch": 1,
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "modified": "2025-12-01T09:46:50.478454+0000",
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:        "services": {}
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    },
Dec  1 04:46:57 np0005540825 magical_galileo[74812]:    "progress_events": {}
Dec  1 04:46:57 np0005540825 magical_galileo[74812]: }
Dec  1 04:46:57 np0005540825 systemd[1]: libpod-42b7529e48c529a1fe24a634605079d9d3c6d42b9e70f925628bece7d2190a47.scope: Deactivated successfully.
Dec  1 04:46:57 np0005540825 podman[74796]: 2025-12-01 09:46:57.797584541 +0000 UTC m=+0.385926262 container died 42b7529e48c529a1fe24a634605079d9d3c6d42b9e70f925628bece7d2190a47 (image=quay.io/ceph/ceph:v19, name=magical_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:46:57 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ae60eef91ed8564f74ac65a5821f50ebf7726acb924c0b396e905703e2a503de-merged.mount: Deactivated successfully.
Dec  1 04:46:57 np0005540825 podman[74796]: 2025-12-01 09:46:57.833550434 +0000 UTC m=+0.421892155 container remove 42b7529e48c529a1fe24a634605079d9d3c6d42b9e70f925628bece7d2190a47 (image=quay.io/ceph/ceph:v19, name=magical_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 04:46:57 np0005540825 systemd[1]: libpod-conmon-42b7529e48c529a1fe24a634605079d9d3c6d42b9e70f925628bece7d2190a47.scope: Deactivated successfully.
Dec  1 04:46:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:58.012+0000 7face6e8f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'orchestrator'
Dec  1 04:46:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:58.236+0000 7face6e8f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_perf_query'
Dec  1 04:46:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:58.313+0000 7face6e8f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_support'
Dec  1 04:46:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:58.383+0000 7face6e8f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'pg_autoscaler'
Dec  1 04:46:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:58.464+0000 7face6e8f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'progress'
Dec  1 04:46:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:58.532+0000 7face6e8f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'prometheus'
Dec  1 04:46:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:58.864+0000 7face6e8f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rbd_support'
Dec  1 04:46:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:58.967+0000 7face6e8f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:46:58 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'restful'
Dec  1 04:46:59 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rgw'
Dec  1 04:46:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:59.388+0000 7face6e8f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:46:59 np0005540825 ceph-mgr[74709]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:46:59 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rook'
Dec  1 04:46:59 np0005540825 podman[74851]: 2025-12-01 09:46:59.924246768 +0000 UTC m=+0.058348358 container create afd9c5651fdb98980342144c34dcfdfcb6235ca2cd642c1bc63c77d9e8c46392 (image=quay.io/ceph/ceph:v19, name=romantic_sinoussi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 04:46:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:46:59.954+0000 7face6e8f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:46:59 np0005540825 ceph-mgr[74709]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:46:59 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'selftest'
Dec  1 04:46:59 np0005540825 systemd[1]: Started libpod-conmon-afd9c5651fdb98980342144c34dcfdfcb6235ca2cd642c1bc63c77d9e8c46392.scope.
Dec  1 04:46:59 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:46:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20e84a205d15828d73e1b36683a51a6f31127cd8e4b0e4d8362ba5bffbde17b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20e84a205d15828d73e1b36683a51a6f31127cd8e4b0e4d8362ba5bffbde17b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:46:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20e84a205d15828d73e1b36683a51a6f31127cd8e4b0e4d8362ba5bffbde17b8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
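[annotation] The xfs messages are informational rather than errors: the filesystem backing these overlay bind mounts was apparently created without the xfs bigtime feature, so its inode timestamps are 32-bit and the kernel notes on each remount that they run out at 0x7fffffff seconds after the epoch. That constant is easy to sanity-check:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit epoch second, as quoted by the kernel.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00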
Dec  1 04:47:00 np0005540825 podman[74851]: 2025-12-01 09:46:59.902372638 +0000 UTC m=+0.036474258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:00 np0005540825 podman[74851]: 2025-12-01 09:47:00.007937266 +0000 UTC m=+0.142038876 container init afd9c5651fdb98980342144c34dcfdfcb6235ca2cd642c1bc63c77d9e8c46392 (image=quay.io/ceph/ceph:v19, name=romantic_sinoussi, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  1 04:47:00 np0005540825 podman[74851]: 2025-12-01 09:47:00.019881773 +0000 UTC m=+0.153983383 container start afd9c5651fdb98980342144c34dcfdfcb6235ca2cd642c1bc63c77d9e8c46392 (image=quay.io/ceph/ceph:v19, name=romantic_sinoussi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:00 np0005540825 podman[74851]: 2025-12-01 09:47:00.02353287 +0000 UTC m=+0.157634460 container attach afd9c5651fdb98980342144c34dcfdfcb6235ca2cd642c1bc63c77d9e8c46392 (image=quay.io/ceph/ceph:v19, name=romantic_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec  1 04:47:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:00.026+0000 7face6e8f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'snap_schedule'
Dec  1 04:47:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:00.109+0000 7face6e8f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'stats'
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'status'
Dec  1 04:47:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  1 04:47:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024250995' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]: 
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]: {
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "health": {
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "status": "HEALTH_OK",
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "checks": {},
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "mutes": []
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    },
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "election_epoch": 5,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "quorum": [
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        0
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    ],
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "quorum_names": [
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "compute-0"
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    ],
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "quorum_age": 7,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "monmap": {
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "epoch": 1,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "min_mon_release_name": "squid",
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "num_mons": 1
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    },
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "osdmap": {
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "epoch": 1,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "num_osds": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "num_up_osds": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "osd_up_since": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "num_in_osds": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "osd_in_since": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "num_remapped_pgs": 0
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    },
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "pgmap": {
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "pgs_by_state": [],
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "num_pgs": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "num_pools": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "num_objects": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "data_bytes": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "bytes_used": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "bytes_avail": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "bytes_total": 0
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    },
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "fsmap": {
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "epoch": 1,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "btime": "2025-12-01T09:46:50.475394+0000",
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "by_rank": [],
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "up:standby": 0
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    },
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "mgrmap": {
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "available": false,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "num_standbys": 0,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "modules": [
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:            "iostat",
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:            "nfs",
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:            "restful"
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        ],
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "services": {}
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    },
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "servicemap": {
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "epoch": 1,
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "modified": "2025-12-01T09:46:50.478454+0000",
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:        "services": {}
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    },
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]:    "progress_events": {}
Dec  1 04:47:00 np0005540825 romantic_sinoussi[74867]: }
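[annotation] This dump comes from another throwaway container (romantic_sinoussi) running ceph status --format json-pretty, and the field that matters is "available": false under "mgrmap": the manager is still activating, so the bootstrap keeps polling. A hedged sketch of such a polling loop (illustrative only; the real cephadm logic differs):

    import json
    import subprocess
    import time

    def mgr_available() -> bool:
        # The same command the audit log shows being dispatched to the mon.
        out = subprocess.run(
            ["ceph", "status", "--format", "json-pretty"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["mgrmap"]["available"]

    deadline = time.monotonic() + 60  # assumed timeout, not taken from the log
    while not mgr_available():
        if time.monotonic() > deadline:
            raise TimeoutError("mgr never became available")
        time.sleep(2)

In the very next status dump below the same field reads "available": true, at which point a loop like this would exit.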
Dec  1 04:47:00 np0005540825 systemd[1]: libpod-afd9c5651fdb98980342144c34dcfdfcb6235ca2cd642c1bc63c77d9e8c46392.scope: Deactivated successfully.
Dec  1 04:47:00 np0005540825 podman[74851]: 2025-12-01 09:47:00.239012792 +0000 UTC m=+0.373114382 container died afd9c5651fdb98980342144c34dcfdfcb6235ca2cd642c1bc63c77d9e8c46392 (image=quay.io/ceph/ceph:v19, name=romantic_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:00 np0005540825 systemd[1]: var-lib-containers-storage-overlay-20e84a205d15828d73e1b36683a51a6f31127cd8e4b0e4d8362ba5bffbde17b8-merged.mount: Deactivated successfully.
Dec  1 04:47:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:00.261+0000 7face6e8f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telegraf'
Dec  1 04:47:00 np0005540825 podman[74851]: 2025-12-01 09:47:00.273805054 +0000 UTC m=+0.407906654 container remove afd9c5651fdb98980342144c34dcfdfcb6235ca2cd642c1bc63c77d9e8c46392 (image=quay.io/ceph/ceph:v19, name=romantic_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  1 04:47:00 np0005540825 systemd[1]: libpod-conmon-afd9c5651fdb98980342144c34dcfdfcb6235ca2cd642c1bc63c77d9e8c46392.scope: Deactivated successfully.
Dec  1 04:47:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:00.329+0000 7face6e8f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telemetry'
Dec  1 04:47:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:00.483+0000 7face6e8f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'test_orchestrator'
Dec  1 04:47:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:00.709+0000 7face6e8f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'volumes'
Dec  1 04:47:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:00.969+0000 7face6e8f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:47:00 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'zabbix'
Dec  1 04:47:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:01.040+0000 7face6e8f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
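[annotation] The long run of "Module X has missing NOTIFY_TYPES member" warnings ending here is benign: recent ceph-mgr releases expect each Python module to declare which cluster notifications it consumes via a NOTIFY_TYPES class attribute, and the loader logs the absence at error level without refusing to load the module (every module above still loads, and several are constructed below). A minimal skeleton of the declaration, assuming the upstream mgr_module API and therefore only importable inside the mgr runtime:

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring the notifications notify() handles is what silences
        # the "missing NOTIFY_TYPES member" warning seen in this log.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.info("got %s notification", notify_type)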
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: ms_deliver_dispatch: unhandled message 0x561f236089c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fospow
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map Activating!
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.fospow(active, starting, since 0.0930012s)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map I am now activating
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e1 all = 1
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fospow", "id": "compute-0.fospow"} v 0)
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fospow", "id": "compute-0.fospow"}]: dispatch
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: balancer
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [balancer INFO root] Starting
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Manager daemon compute-0.fospow is now available
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: crash
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:47:01
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [balancer INFO root] No pools available
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: devicehealth
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Starting
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: iostat
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: nfs
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: orchestrator
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: pg_autoscaler
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: progress
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [progress INFO root] Loading...
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [progress INFO root] No stored events to load
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [progress INFO root] Loaded [] historic events
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [progress INFO root] Loaded OSDMap, ready.
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] recovery thread starting
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] starting setup
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: rbd_support
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: restful
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: status
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [restful INFO root] server_addr: :: server_port: 8003
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"} v 0)
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"}]: dispatch
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: telemetry
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [restful WARNING root] server not running: no certificate configured
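[annotation] The restful module loads but deliberately keeps its HTTPS endpoint down ("server not running: no certificate configured") because no TLS certificate is stored yet; it had already chosen port 8003 a few lines earlier. In a lab cluster like this one, the documented self-signed shortcut is usually enough (sketch, assuming the caller has the admin keyring):

    import subprocess

    # Generates and stores a self-signed certificate for the restful
    # module; afterwards it binds on the configured port (8003 here).
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)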
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] PerfHandler: starting
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TaskHandler: starting
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"} v 0)
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"}]: dispatch
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' 
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] setup complete
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' 
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' 
Dec  1 04:47:01 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: volumes
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: Activating manager daemon compute-0.fospow
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: Manager daemon compute-0.fospow is now available
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"}]: dispatch
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"}]: dispatch
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' 
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' 
Dec  1 04:47:01 np0005540825 ceph-mon[74416]: from='mgr.14102 192.168.122.100:0/3078612886' entity='mgr.compute-0.fospow' 
Dec  1 04:47:02 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.fospow(active, since 1.10846s)
Dec  1 04:47:02 np0005540825 podman[74986]: 2025-12-01 09:47:02.379805544 +0000 UTC m=+0.073179831 container create fadb1f6f8d7153b2a1ddc1cd5f7dbf991a55a3a54369f29eec0947a59dc942c3 (image=quay.io/ceph/ceph:v19, name=focused_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 04:47:02 np0005540825 systemd[1]: Started libpod-conmon-fadb1f6f8d7153b2a1ddc1cd5f7dbf991a55a3a54369f29eec0947a59dc942c3.scope.
Dec  1 04:47:02 np0005540825 podman[74986]: 2025-12-01 09:47:02.351478893 +0000 UTC m=+0.044853260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:02 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3206a1072276e68f1e6f7b39cd8e6b67f84552f9e1f84dc2f87ce483d00244/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3206a1072276e68f1e6f7b39cd8e6b67f84552f9e1f84dc2f87ce483d00244/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3206a1072276e68f1e6f7b39cd8e6b67f84552f9e1f84dc2f87ce483d00244/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:02 np0005540825 podman[74986]: 2025-12-01 09:47:02.479645501 +0000 UTC m=+0.173019858 container init fadb1f6f8d7153b2a1ddc1cd5f7dbf991a55a3a54369f29eec0947a59dc942c3 (image=quay.io/ceph/ceph:v19, name=focused_hofstadter, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 04:47:02 np0005540825 podman[74986]: 2025-12-01 09:47:02.489231795 +0000 UTC m=+0.182606102 container start fadb1f6f8d7153b2a1ddc1cd5f7dbf991a55a3a54369f29eec0947a59dc942c3 (image=quay.io/ceph/ceph:v19, name=focused_hofstadter, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:02 np0005540825 podman[74986]: 2025-12-01 09:47:02.493200049 +0000 UTC m=+0.186574426 container attach fadb1f6f8d7153b2a1ddc1cd5f7dbf991a55a3a54369f29eec0947a59dc942c3 (image=quay.io/ceph/ceph:v19, name=focused_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:47:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  1 04:47:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278816778' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]: 
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]: {
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "health": {
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "status": "HEALTH_OK",
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "checks": {},
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "mutes": []
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    },
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "election_epoch": 5,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "quorum": [
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        0
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    ],
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "quorum_names": [
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "compute-0"
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    ],
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "quorum_age": 9,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "monmap": {
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "epoch": 1,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "min_mon_release_name": "squid",
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "num_mons": 1
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    },
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "osdmap": {
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "epoch": 1,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "num_osds": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "num_up_osds": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "osd_up_since": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "num_in_osds": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "osd_in_since": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "num_remapped_pgs": 0
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    },
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "pgmap": {
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "pgs_by_state": [],
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "num_pgs": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "num_pools": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "num_objects": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "data_bytes": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "bytes_used": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "bytes_avail": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "bytes_total": 0
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    },
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "fsmap": {
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "epoch": 1,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "btime": "2025-12-01T09:46:50.475394+0000",
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "by_rank": [],
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "up:standby": 0
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    },
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "mgrmap": {
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "available": true,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "num_standbys": 0,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "modules": [
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:            "iostat",
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:            "nfs",
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:            "restful"
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        ],
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "services": {}
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    },
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "servicemap": {
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "epoch": 1,
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "modified": "2025-12-01T09:46:50.478454+0000",
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:        "services": {}
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    },
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]:    "progress_events": {}
Dec  1 04:47:02 np0005540825 focused_hofstadter[75003]: }
Dec  1 04:47:02 np0005540825 systemd[1]: libpod-fadb1f6f8d7153b2a1ddc1cd5f7dbf991a55a3a54369f29eec0947a59dc942c3.scope: Deactivated successfully.
Dec  1 04:47:02 np0005540825 podman[74986]: 2025-12-01 09:47:02.954447907 +0000 UTC m=+0.647822174 container died fadb1f6f8d7153b2a1ddc1cd5f7dbf991a55a3a54369f29eec0947a59dc942c3 (image=quay.io/ceph/ceph:v19, name=focused_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 04:47:02 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2a3206a1072276e68f1e6f7b39cd8e6b67f84552f9e1f84dc2f87ce483d00244-merged.mount: Deactivated successfully.
Dec  1 04:47:02 np0005540825 podman[74986]: 2025-12-01 09:47:02.991451498 +0000 UTC m=+0.684825765 container remove fadb1f6f8d7153b2a1ddc1cd5f7dbf991a55a3a54369f29eec0947a59dc942c3 (image=quay.io/ceph/ceph:v19, name=focused_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 04:47:03 np0005540825 systemd[1]: libpod-conmon-fadb1f6f8d7153b2a1ddc1cd5f7dbf991a55a3a54369f29eec0947a59dc942c3.scope: Deactivated successfully.
Dec  1 04:47:03 np0005540825 podman[75040]: 2025-12-01 09:47:03.091795868 +0000 UTC m=+0.070244653 container create a402d23e56710fb27efee1de7a6951e5ef0c676caa9725340e2ccbc54dc06392 (image=quay.io/ceph/ceph:v19, name=gifted_borg, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:03 np0005540825 systemd[1]: Started libpod-conmon-a402d23e56710fb27efee1de7a6951e5ef0c676caa9725340e2ccbc54dc06392.scope.
Dec  1 04:47:03 np0005540825 ceph-mgr[74709]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  1 04:47:03 np0005540825 podman[75040]: 2025-12-01 09:47:03.0628187 +0000 UTC m=+0.041267585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:03 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:03 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.fospow(active, since 2s)
Dec  1 04:47:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f537d826f3a70b2ec9d8687c700908d481148279a567f71fa02e4f88d2f82a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f537d826f3a70b2ec9d8687c700908d481148279a567f71fa02e4f88d2f82a0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f537d826f3a70b2ec9d8687c700908d481148279a567f71fa02e4f88d2f82a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f537d826f3a70b2ec9d8687c700908d481148279a567f71fa02e4f88d2f82a0/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:03 np0005540825 podman[75040]: 2025-12-01 09:47:03.182526813 +0000 UTC m=+0.160975688 container init a402d23e56710fb27efee1de7a6951e5ef0c676caa9725340e2ccbc54dc06392 (image=quay.io/ceph/ceph:v19, name=gifted_borg, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 04:47:03 np0005540825 podman[75040]: 2025-12-01 09:47:03.193571836 +0000 UTC m=+0.172020671 container start a402d23e56710fb27efee1de7a6951e5ef0c676caa9725340e2ccbc54dc06392 (image=quay.io/ceph/ceph:v19, name=gifted_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 04:47:03 np0005540825 podman[75040]: 2025-12-01 09:47:03.197574822 +0000 UTC m=+0.176023647 container attach a402d23e56710fb27efee1de7a6951e5ef0c676caa9725340e2ccbc54dc06392 (image=quay.io/ceph/ceph:v19, name=gifted_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  1 04:47:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1875437703' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  1 04:47:03 np0005540825 gifted_borg[75057]: 
Dec  1 04:47:03 np0005540825 gifted_borg[75057]: [global]
Dec  1 04:47:03 np0005540825 gifted_borg[75057]: 	fsid = 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:47:03 np0005540825 gifted_borg[75057]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
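[annotation] ceph config assimilate-conf reads a ceph.conf, stores every option it can in the monitors' central config database, and prints back the minimal remainder that must stay in the local file; fsid and mon_host identify the cluster itself and cannot live in the config store, which is why gifted_borg echoes exactly those two lines. A hedged usage sketch (the file contents and the extra option are hypothetical; only the command itself appears in the audit log):

    import subprocess
    import tempfile

    local_conf = (
        "[global]\n"
        "fsid = 365f19c2-81e5-5edd-b6b4-280555214d3a\n"
        "mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]\n"
        "osd_pool_default_size = 1\n"  # hypothetical option that WOULD be absorbed
    )

    with tempfile.NamedTemporaryFile("w", suffix=".conf") as f:
        f.write(local_conf)
        f.flush()
        # Absorb what the config store accepts; what it cannot hold
        # (fsid, mon_host) comes back on stdout for the minimal conf.
        leftover = subprocess.run(
            ["ceph", "config", "assimilate-conf", "-i", f.name],
            capture_output=True, text=True, check=True,
        ).stdout
    print(leftover)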
Dec  1 04:47:03 np0005540825 systemd[1]: libpod-a402d23e56710fb27efee1de7a6951e5ef0c676caa9725340e2ccbc54dc06392.scope: Deactivated successfully.
Dec  1 04:47:03 np0005540825 podman[75040]: 2025-12-01 09:47:03.557520964 +0000 UTC m=+0.535969759 container died a402d23e56710fb27efee1de7a6951e5ef0c676caa9725340e2ccbc54dc06392 (image=quay.io/ceph/ceph:v19, name=gifted_borg, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 04:47:03 np0005540825 systemd[1]: var-lib-containers-storage-overlay-8f537d826f3a70b2ec9d8687c700908d481148279a567f71fa02e4f88d2f82a0-merged.mount: Deactivated successfully.
Dec  1 04:47:03 np0005540825 podman[75040]: 2025-12-01 09:47:03.592747498 +0000 UTC m=+0.571196283 container remove a402d23e56710fb27efee1de7a6951e5ef0c676caa9725340e2ccbc54dc06392 (image=quay.io/ceph/ceph:v19, name=gifted_borg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:47:03 np0005540825 systemd[1]: libpod-conmon-a402d23e56710fb27efee1de7a6951e5ef0c676caa9725340e2ccbc54dc06392.scope: Deactivated successfully.
Dec  1 04:47:03 np0005540825 podman[75094]: 2025-12-01 09:47:03.652891233 +0000 UTC m=+0.041667776 container create e2f4bd1a293eb147de988646a214b1a27f85d0a535e178209138b27b69612080 (image=quay.io/ceph/ceph:v19, name=relaxed_maxwell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:03 np0005540825 systemd[1]: Started libpod-conmon-e2f4bd1a293eb147de988646a214b1a27f85d0a535e178209138b27b69612080.scope.
Dec  1 04:47:03 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:03 np0005540825 podman[75094]: 2025-12-01 09:47:03.633221931 +0000 UTC m=+0.021998454 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1239025c33f5d89b38a7b2a2b5e42cf5786d1e769a52811a60c2e4a542d64e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1239025c33f5d89b38a7b2a2b5e42cf5786d1e769a52811a60c2e4a542d64e3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1239025c33f5d89b38a7b2a2b5e42cf5786d1e769a52811a60c2e4a542d64e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:03 np0005540825 podman[75094]: 2025-12-01 09:47:03.747506971 +0000 UTC m=+0.136283554 container init e2f4bd1a293eb147de988646a214b1a27f85d0a535e178209138b27b69612080 (image=quay.io/ceph/ceph:v19, name=relaxed_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:03 np0005540825 podman[75094]: 2025-12-01 09:47:03.756595672 +0000 UTC m=+0.145372185 container start e2f4bd1a293eb147de988646a214b1a27f85d0a535e178209138b27b69612080 (image=quay.io/ceph/ceph:v19, name=relaxed_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  1 04:47:03 np0005540825 podman[75094]: 2025-12-01 09:47:03.760440904 +0000 UTC m=+0.149217437 container attach e2f4bd1a293eb147de988646a214b1a27f85d0a535e178209138b27b69612080 (image=quay.io/ceph/ceph:v19, name=relaxed_maxwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  1 04:47:04 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1875437703' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  1 04:47:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec  1 04:47:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3579193755' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  1 04:47:05 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3579193755' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  1 04:47:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3579193755' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr respawn  1: '-n'
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr respawn  2: 'mgr.compute-0.fospow'
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr respawn  3: '-f'
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr respawn  4: '--setuser'
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr respawn  5: 'ceph'
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr respawn  6: '--setgroup'
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr respawn  7: 'ceph'
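[annotation] Enabling cephadm changed the active manager's set of enabled modules, and rather than exiting and being restarted, ceph-mgr re-executes itself in place: the "respawn" lines dump the saved executable path and argv (indices 0 through 7) that it hands straight back to exec. A toy reproduction of that re-exec pattern (illustrative, not ceph code):

    import os
    import sys

    def respawn() -> None:
        # Print the saved argv the way ceph-mgr logs it, then replace
        # this process image with a fresh copy using the same arguments.
        argv = [sys.executable] + sys.argv
        for i, arg in enumerate(argv):
            print(f"respawn  {i}: {arg!r}")
        os.execv(sys.executable, argv)

    if __name__ == "__main__":
        # Environment variables survive execv, so this guard re-execs exactly once.
        if os.environ.get("RESPAWNED") != "1":
            os.environ["RESPAWNED"] = "1"
            respawn()
        print("running with a refreshed process image, pid", os.getpid())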
Dec  1 04:47:05 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.fospow(active, since 4s)
Dec  1 04:47:05 np0005540825 systemd[1]: libpod-e2f4bd1a293eb147de988646a214b1a27f85d0a535e178209138b27b69612080.scope: Deactivated successfully.
Dec  1 04:47:05 np0005540825 podman[75094]: 2025-12-01 09:47:05.218791355 +0000 UTC m=+1.607567908 container died e2f4bd1a293eb147de988646a214b1a27f85d0a535e178209138b27b69612080 (image=quay.io/ceph/ceph:v19, name=relaxed_maxwell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:05 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f1239025c33f5d89b38a7b2a2b5e42cf5786d1e769a52811a60c2e4a542d64e3-merged.mount: Deactivated successfully.
Dec  1 04:47:05 np0005540825 podman[75094]: 2025-12-01 09:47:05.268845572 +0000 UTC m=+1.657622145 container remove e2f4bd1a293eb147de988646a214b1a27f85d0a535e178209138b27b69612080 (image=quay.io/ceph/ceph:v19, name=relaxed_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:47:05 np0005540825 systemd[1]: libpod-conmon-e2f4bd1a293eb147de988646a214b1a27f85d0a535e178209138b27b69612080.scope: Deactivated successfully.
Dec  1 04:47:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ignoring --setuser ceph since I am not root
Dec  1 04:47:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ignoring --setgroup ceph since I am not root
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: pidfile_write: ignore empty --pid-file
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'alerts'
Dec  1 04:47:05 np0005540825 podman[75148]: 2025-12-01 09:47:05.377918903 +0000 UTC m=+0.067110130 container create 413751d501da780d2a3a1e2fcfea631b5eaaf02b9f698f0dfb99b7e002eab6f8 (image=quay.io/ceph/ceph:v19, name=elated_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  1 04:47:05 np0005540825 systemd[1]: Started libpod-conmon-413751d501da780d2a3a1e2fcfea631b5eaaf02b9f698f0dfb99b7e002eab6f8.scope.
Dec  1 04:47:05 np0005540825 podman[75148]: 2025-12-01 09:47:05.351503363 +0000 UTC m=+0.040694640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:47:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:05.460+0000 7f98d54aa140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'balancer'
Dec  1 04:47:05 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435890c8c08afea3f4ab23482e3aad42677b08b37f482a873c7f21a4fb6c2149/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435890c8c08afea3f4ab23482e3aad42677b08b37f482a873c7f21a4fb6c2149/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435890c8c08afea3f4ab23482e3aad42677b08b37f482a873c7f21a4fb6c2149/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:05 np0005540825 podman[75148]: 2025-12-01 09:47:05.507653543 +0000 UTC m=+0.196844770 container init 413751d501da780d2a3a1e2fcfea631b5eaaf02b9f698f0dfb99b7e002eab6f8 (image=quay.io/ceph/ceph:v19, name=elated_mclean, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  1 04:47:05 np0005540825 podman[75148]: 2025-12-01 09:47:05.517324509 +0000 UTC m=+0.206515726 container start 413751d501da780d2a3a1e2fcfea631b5eaaf02b9f698f0dfb99b7e002eab6f8 (image=quay.io/ceph/ceph:v19, name=elated_mclean, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:47:05 np0005540825 podman[75148]: 2025-12-01 09:47:05.520613176 +0000 UTC m=+0.209804383 container attach 413751d501da780d2a3a1e2fcfea631b5eaaf02b9f698f0dfb99b7e002eab6f8 (image=quay.io/ceph/ceph:v19, name=elated_mclean, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  1 04:47:05 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'cephadm'
Dec  1 04:47:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:05.536+0000 7f98d54aa140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  1 04:47:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  1 04:47:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/881284505' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  1 04:47:05 np0005540825 elated_mclean[75184]: {
Dec  1 04:47:05 np0005540825 elated_mclean[75184]:    "epoch": 5,
Dec  1 04:47:05 np0005540825 elated_mclean[75184]:    "available": true,
Dec  1 04:47:05 np0005540825 elated_mclean[75184]:    "active_name": "compute-0.fospow",
Dec  1 04:47:05 np0005540825 elated_mclean[75184]:    "num_standby": 0
Dec  1 04:47:05 np0005540825 elated_mclean[75184]: }
Dec  1 04:47:05 np0005540825 systemd[1]: libpod-413751d501da780d2a3a1e2fcfea631b5eaaf02b9f698f0dfb99b7e002eab6f8.scope: Deactivated successfully.
Dec  1 04:47:05 np0005540825 podman[75148]: 2025-12-01 09:47:05.934854908 +0000 UTC m=+0.624046125 container died 413751d501da780d2a3a1e2fcfea631b5eaaf02b9f698f0dfb99b7e002eab6f8 (image=quay.io/ceph/ceph:v19, name=elated_mclean, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:05 np0005540825 systemd[1]: var-lib-containers-storage-overlay-435890c8c08afea3f4ab23482e3aad42677b08b37f482a873c7f21a4fb6c2149-merged.mount: Deactivated successfully.
Dec  1 04:47:05 np0005540825 podman[75148]: 2025-12-01 09:47:05.973903603 +0000 UTC m=+0.663094820 container remove 413751d501da780d2a3a1e2fcfea631b5eaaf02b9f698f0dfb99b7e002eab6f8 (image=quay.io/ceph/ceph:v19, name=elated_mclean, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 04:47:05 np0005540825 systemd[1]: libpod-conmon-413751d501da780d2a3a1e2fcfea631b5eaaf02b9f698f0dfb99b7e002eab6f8.scope: Deactivated successfully.
Dec  1 04:47:06 np0005540825 podman[75232]: 2025-12-01 09:47:06.042515632 +0000 UTC m=+0.047360317 container create 615cdb4e4969c0c366290bcdf9385f02056767734816011a82505165e1579188 (image=quay.io/ceph/ceph:v19, name=great_thompson, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:06 np0005540825 systemd[1]: Started libpod-conmon-615cdb4e4969c0c366290bcdf9385f02056767734816011a82505165e1579188.scope.
Dec  1 04:47:06 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efff915170ed2d4a52d30928eb3c091b0842a78649f70d54cc8795a5cbc2b9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efff915170ed2d4a52d30928eb3c091b0842a78649f70d54cc8795a5cbc2b9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efff915170ed2d4a52d30928eb3c091b0842a78649f70d54cc8795a5cbc2b9c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:06 np0005540825 podman[75232]: 2025-12-01 09:47:06.01526718 +0000 UTC m=+0.020111905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:06 np0005540825 podman[75232]: 2025-12-01 09:47:06.118072574 +0000 UTC m=+0.122917229 container init 615cdb4e4969c0c366290bcdf9385f02056767734816011a82505165e1579188 (image=quay.io/ceph/ceph:v19, name=great_thompson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 04:47:06 np0005540825 podman[75232]: 2025-12-01 09:47:06.123430726 +0000 UTC m=+0.128275371 container start 615cdb4e4969c0c366290bcdf9385f02056767734816011a82505165e1579188 (image=quay.io/ceph/ceph:v19, name=great_thompson, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 04:47:06 np0005540825 podman[75232]: 2025-12-01 09:47:06.1262189 +0000 UTC m=+0.131063565 container attach 615cdb4e4969c0c366290bcdf9385f02056767734816011a82505165e1579188 (image=quay.io/ceph/ceph:v19, name=great_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:47:06 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3579193755' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  1 04:47:06 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'crash'
Dec  1 04:47:06 np0005540825 ceph-mgr[74709]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:47:06 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'dashboard'
Dec  1 04:47:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:06.323+0000 7f98d54aa140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:47:06 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'devicehealth'
Dec  1 04:47:06 np0005540825 ceph-mgr[74709]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  1 04:47:06 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'diskprediction_local'
Dec  1 04:47:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:06.937+0000 7f98d54aa140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  1 04:47:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  1 04:47:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  1 04:47:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  from numpy import show_config as show_numpy_config
Dec  1 04:47:07 np0005540825 ceph-mgr[74709]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:47:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:07.088+0000 7f98d54aa140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:47:07 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'influx'
Dec  1 04:47:07 np0005540825 ceph-mgr[74709]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:47:07 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'insights'
Dec  1 04:47:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:07.154+0000 7f98d54aa140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:47:07 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'iostat'
Dec  1 04:47:07 np0005540825 ceph-mgr[74709]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:47:07 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'k8sevents'
Dec  1 04:47:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:07.280+0000 7f98d54aa140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:47:07 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'localpool'
Dec  1 04:47:07 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mds_autoscaler'
Dec  1 04:47:07 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mirroring'
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'nfs'
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'orchestrator'
Dec  1 04:47:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:08.251+0000 7f98d54aa140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_perf_query'
Dec  1 04:47:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:08.462+0000 7f98d54aa140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_support'
Dec  1 04:47:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:08.536+0000 7f98d54aa140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'pg_autoscaler'
Dec  1 04:47:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:08.600+0000 7f98d54aa140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'progress'
Dec  1 04:47:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:08.695+0000 7f98d54aa140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:47:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'prometheus'
Dec  1 04:47:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:08.767+0000 7f98d54aa140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:47:09 np0005540825 ceph-mgr[74709]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:47:09 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rbd_support'
Dec  1 04:47:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:09.093+0000 7f98d54aa140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:47:09 np0005540825 ceph-mgr[74709]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:47:09 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'restful'
Dec  1 04:47:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:09.186+0000 7f98d54aa140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:47:09 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rgw'
Dec  1 04:47:09 np0005540825 ceph-mgr[74709]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:47:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:09.587+0000 7f98d54aa140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:47:09 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rook'
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'selftest'
Dec  1 04:47:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:10.157+0000 7f98d54aa140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'snap_schedule'
Dec  1 04:47:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:10.224+0000 7f98d54aa140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'stats'
Dec  1 04:47:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:10.297+0000 7f98d54aa140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'status'
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telegraf'
Dec  1 04:47:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:10.430+0000 7f98d54aa140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telemetry'
Dec  1 04:47:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:10.501+0000 7f98d54aa140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'test_orchestrator'
Dec  1 04:47:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:10.651+0000 7f98d54aa140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:47:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'volumes'
Dec  1 04:47:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:10.859+0000 7f98d54aa140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'zabbix'
Dec  1 04:47:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:11.105+0000 7f98d54aa140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  1 04:47:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:47:11.174+0000 7f98d54aa140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fospow restarted
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fospow
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: ms_deliver_dispatch: unhandled message 0x55e9f162ed00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map Activating!
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map I am now activating
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.fospow(active, starting, since 0.0167263s)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fospow", "id": "compute-0.fospow"} v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fospow", "id": "compute-0.fospow"}]: dispatch
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e1 all = 1
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: balancer
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Manager daemon compute-0.fospow is now available
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] Starting
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:47:11
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] No pools available
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: cephadm
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: crash
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: Active manager daemon compute-0.fospow restarted
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: Activating manager daemon compute-0.fospow
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: Manager daemon compute-0.fospow is now available
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: devicehealth
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: iostat
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Starting
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: nfs
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: orchestrator
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: pg_autoscaler
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: progress
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [progress INFO root] Loading...
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [progress INFO root] No stored events to load
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [progress INFO root] Loaded [] historic events
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [progress INFO root] Loaded OSDMap, ready.
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] recovery thread starting
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] starting setup
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: rbd_support
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: restful
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [restful INFO root] server_addr: :: server_port: 8003
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: status
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [restful WARNING root] server not running: no certificate configured
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"} v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"}]: dispatch
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: telemetry
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] PerfHandler: starting
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TaskHandler: starting
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"} v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"}]: dispatch
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] setup complete
Dec  1 04:47:11 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: volumes
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Dec  1 04:47:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:12 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.fospow(active, since 1.03111s)
Dec  1 04:47:12 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec  1 04:47:12 np0005540825 great_thompson[75249]: {
Dec  1 04:47:12 np0005540825 great_thompson[75249]:    "mgrmap_epoch": 7,
Dec  1 04:47:12 np0005540825 great_thompson[75249]:    "initialized": true
Dec  1 04:47:12 np0005540825 great_thompson[75249]: }
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: Found migration_current of "None". Setting to last migration.
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"}]: dispatch
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"}]: dispatch
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:12 np0005540825 systemd[1]: libpod-615cdb4e4969c0c366290bcdf9385f02056767734816011a82505165e1579188.scope: Deactivated successfully.
Dec  1 04:47:12 np0005540825 podman[75232]: 2025-12-01 09:47:12.248669612 +0000 UTC m=+6.253514257 container died 615cdb4e4969c0c366290bcdf9385f02056767734816011a82505165e1579188 (image=quay.io/ceph/ceph:v19, name=great_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 04:47:12 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4efff915170ed2d4a52d30928eb3c091b0842a78649f70d54cc8795a5cbc2b9c-merged.mount: Deactivated successfully.
Dec  1 04:47:12 np0005540825 podman[75232]: 2025-12-01 09:47:12.288603111 +0000 UTC m=+6.293447746 container remove 615cdb4e4969c0c366290bcdf9385f02056767734816011a82505165e1579188 (image=quay.io/ceph/ceph:v19, name=great_thompson, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 04:47:12 np0005540825 systemd[1]: libpod-conmon-615cdb4e4969c0c366290bcdf9385f02056767734816011a82505165e1579188.scope: Deactivated successfully.
Dec  1 04:47:12 np0005540825 podman[75399]: 2025-12-01 09:47:12.354490011 +0000 UTC m=+0.044599672 container create 3306bf5fd27fe4120d3504369292d9614e01d413e1dceb8faeba6c88d5498ec7 (image=quay.io/ceph/ceph:v19, name=competent_ptolemy, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:47:12 np0005540825 systemd[1]: Started libpod-conmon-3306bf5fd27fe4120d3504369292d9614e01d413e1dceb8faeba6c88d5498ec7.scope.
Dec  1 04:47:12 np0005540825 podman[75399]: 2025-12-01 09:47:12.334592087 +0000 UTC m=+0.024701728 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:12 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779fe9cdb83bb1279708bae65e5b310ebb4d634c575dffda817675b22c14e47f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779fe9cdb83bb1279708bae65e5b310ebb4d634c575dffda817675b22c14e47f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779fe9cdb83bb1279708bae65e5b310ebb4d634c575dffda817675b22c14e47f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:12 np0005540825 podman[75399]: 2025-12-01 09:47:12.475158612 +0000 UTC m=+0.165268323 container init 3306bf5fd27fe4120d3504369292d9614e01d413e1dceb8faeba6c88d5498ec7 (image=quay.io/ceph/ceph:v19, name=competent_ptolemy, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:12 np0005540825 podman[75399]: 2025-12-01 09:47:12.484917004 +0000 UTC m=+0.175026635 container start 3306bf5fd27fe4120d3504369292d9614e01d413e1dceb8faeba6c88d5498ec7 (image=quay.io/ceph/ceph:v19, name=competent_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:12 np0005540825 podman[75399]: 2025-12-01 09:47:12.488574128 +0000 UTC m=+0.178683799 container attach 3306bf5fd27fe4120d3504369292d9614e01d413e1dceb8faeba6c88d5498ec7 (image=quay.io/ceph/ceph:v19, name=competent_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:12 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:47:12] ENGINE Bus STARTING
Dec  1 04:47:12 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:47:12] ENGINE Bus STARTING
Dec  1 04:47:12 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  1 04:47:12 np0005540825 systemd[1]: libpod-3306bf5fd27fe4120d3504369292d9614e01d413e1dceb8faeba6c88d5498ec7.scope: Deactivated successfully.
Dec  1 04:47:12 np0005540825 podman[75399]: 2025-12-01 09:47:12.919999084 +0000 UTC m=+0.610108725 container died 3306bf5fd27fe4120d3504369292d9614e01d413e1dceb8faeba6c88d5498ec7 (image=quay.io/ceph/ceph:v19, name=competent_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 04:47:12 np0005540825 systemd[1]: var-lib-containers-storage-overlay-779fe9cdb83bb1279708bae65e5b310ebb4d634c575dffda817675b22c14e47f-merged.mount: Deactivated successfully.
Dec  1 04:47:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019926308 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:47:12 np0005540825 podman[75399]: 2025-12-01 09:47:12.965735834 +0000 UTC m=+0.655845465 container remove 3306bf5fd27fe4120d3504369292d9614e01d413e1dceb8faeba6c88d5498ec7 (image=quay.io/ceph/ceph:v19, name=competent_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  1 04:47:12 np0005540825 systemd[1]: libpod-conmon-3306bf5fd27fe4120d3504369292d9614e01d413e1dceb8faeba6c88d5498ec7.scope: Deactivated successfully.
Dec  1 04:47:12 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:47:12] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:47:12 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:47:12] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:47:12 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:47:12] ENGINE Client ('192.168.122.100', 46792) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:47:12 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:47:12] ENGINE Client ('192.168.122.100', 46792) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:47:13 np0005540825 podman[75466]: 2025-12-01 09:47:13.046976619 +0000 UTC m=+0.057031322 container create 07485f6eea7f0a59d3c6e7a2c1d952dcea48b3ccab3996478c400c7af323eec6 (image=quay.io/ceph/ceph:v19, name=sharp_galileo, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:47:13] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:47:13] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:47:13] ENGINE Bus STARTED
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:47:13] ENGINE Bus STARTED
Dec  1 04:47:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  1 04:47:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  1 04:47:13 np0005540825 systemd[1]: Started libpod-conmon-07485f6eea7f0a59d3c6e7a2c1d952dcea48b3ccab3996478c400c7af323eec6.scope.
Dec  1 04:47:13 np0005540825 podman[75466]: 2025-12-01 09:47:13.02067624 +0000 UTC m=+0.030731003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:13 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7febdc96ae09aec63ecd11c9c0d57823eb70ff542e5d8e98e3a48b3123ee7e68/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7febdc96ae09aec63ecd11c9c0d57823eb70ff542e5d8e98e3a48b3123ee7e68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7febdc96ae09aec63ecd11c9c0d57823eb70ff542e5d8e98e3a48b3123ee7e68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:13 np0005540825 podman[75466]: 2025-12-01 09:47:13.145999412 +0000 UTC m=+0.156054115 container init 07485f6eea7f0a59d3c6e7a2c1d952dcea48b3ccab3996478c400c7af323eec6 (image=quay.io/ceph/ceph:v19, name=sharp_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 04:47:13 np0005540825 podman[75466]: 2025-12-01 09:47:13.155144008 +0000 UTC m=+0.165198711 container start 07485f6eea7f0a59d3c6e7a2c1d952dcea48b3ccab3996478c400c7af323eec6 (image=quay.io/ceph/ceph:v19, name=sharp_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:13 np0005540825 podman[75466]: 2025-12-01 09:47:13.159001298 +0000 UTC m=+0.169056001 container attach 07485f6eea7f0a59d3c6e7a2c1d952dcea48b3ccab3996478c400c7af323eec6 (image=quay.io/ceph/ceph:v19, name=sharp_galileo, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  1 04:47:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec  1 04:47:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Set ssh ssh_user
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec  1 04:47:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec  1 04:47:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Set ssh ssh_config
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec  1 04:47:13 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec  1 04:47:13 np0005540825 sharp_galileo[75494]: ssh user set to ceph-admin. sudo will be used
Dec  1 04:47:13 np0005540825 systemd[1]: libpod-07485f6eea7f0a59d3c6e7a2c1d952dcea48b3ccab3996478c400c7af323eec6.scope: Deactivated successfully.
Dec  1 04:47:13 np0005540825 podman[75466]: 2025-12-01 09:47:13.537466306 +0000 UTC m=+0.547520999 container died 07485f6eea7f0a59d3c6e7a2c1d952dcea48b3ccab3996478c400c7af323eec6 (image=quay.io/ceph/ceph:v19, name=sharp_galileo, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 04:47:13 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7febdc96ae09aec63ecd11c9c0d57823eb70ff542e5d8e98e3a48b3123ee7e68-merged.mount: Deactivated successfully.
Dec  1 04:47:13 np0005540825 podman[75466]: 2025-12-01 09:47:13.584166531 +0000 UTC m=+0.594221234 container remove 07485f6eea7f0a59d3c6e7a2c1d952dcea48b3ccab3996478c400c7af323eec6 (image=quay.io/ceph/ceph:v19, name=sharp_galileo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:13 np0005540825 systemd[1]: libpod-conmon-07485f6eea7f0a59d3c6e7a2c1d952dcea48b3ccab3996478c400c7af323eec6.scope: Deactivated successfully.
Dec  1 04:47:13 np0005540825 podman[75531]: 2025-12-01 09:47:13.678796231 +0000 UTC m=+0.063286583 container create 1bfc16500c59824210e011b49029fb55cbb46a7c16099e3f51ef64c90e39272d (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:13 np0005540825 systemd[1]: Started libpod-conmon-1bfc16500c59824210e011b49029fb55cbb46a7c16099e3f51ef64c90e39272d.scope.
Dec  1 04:47:13 np0005540825 podman[75531]: 2025-12-01 09:47:13.643937522 +0000 UTC m=+0.028427974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:13 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b4951e577c244a80a9e7b3d221816e44c4f60e67735257807bb080eafcea80/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b4951e577c244a80a9e7b3d221816e44c4f60e67735257807bb080eafcea80/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b4951e577c244a80a9e7b3d221816e44c4f60e67735257807bb080eafcea80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b4951e577c244a80a9e7b3d221816e44c4f60e67735257807bb080eafcea80/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b4951e577c244a80a9e7b3d221816e44c4f60e67735257807bb080eafcea80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:13 np0005540825 podman[75531]: 2025-12-01 09:47:13.78576265 +0000 UTC m=+0.170253082 container init 1bfc16500c59824210e011b49029fb55cbb46a7c16099e3f51ef64c90e39272d (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:13 np0005540825 podman[75531]: 2025-12-01 09:47:13.791559109 +0000 UTC m=+0.176049501 container start 1bfc16500c59824210e011b49029fb55cbb46a7c16099e3f51ef64c90e39272d (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 04:47:13 np0005540825 podman[75531]: 2025-12-01 09:47:13.795688516 +0000 UTC m=+0.180178868 container attach 1bfc16500c59824210e011b49029fb55cbb46a7c16099e3f51ef64c90e39272d (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 04:47:13 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.fospow(active, since 2s)
Dec  1 04:47:14 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:14 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Set ssh ssh_identity_key
Dec  1 04:47:14 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec  1 04:47:14 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Set ssh private key
Dec  1 04:47:14 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Set ssh private key
Dec  1 04:47:14 np0005540825 systemd[1]: libpod-1bfc16500c59824210e011b49029fb55cbb46a7c16099e3f51ef64c90e39272d.scope: Deactivated successfully.
Dec  1 04:47:14 np0005540825 podman[75531]: 2025-12-01 09:47:14.212850244 +0000 UTC m=+0.597340596 container died 1bfc16500c59824210e011b49029fb55cbb46a7c16099e3f51ef64c90e39272d (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 04:47:14 np0005540825 systemd[1]: var-lib-containers-storage-overlay-06b4951e577c244a80a9e7b3d221816e44c4f60e67735257807bb080eafcea80-merged.mount: Deactivated successfully.
Dec  1 04:47:14 np0005540825 podman[75531]: 2025-12-01 09:47:14.253982564 +0000 UTC m=+0.638472926 container remove 1bfc16500c59824210e011b49029fb55cbb46a7c16099e3f51ef64c90e39272d (image=quay.io/ceph/ceph:v19, name=heuristic_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:47:12] ENGINE Bus STARTING
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:47:12] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:47:12] ENGINE Client ('192.168.122.100', 46792) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:47:13] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:47:13] ENGINE Bus STARTED
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:14 np0005540825 systemd[1]: libpod-conmon-1bfc16500c59824210e011b49029fb55cbb46a7c16099e3f51ef64c90e39272d.scope: Deactivated successfully.
Dec  1 04:47:14 np0005540825 podman[75584]: 2025-12-01 09:47:14.327412638 +0000 UTC m=+0.051704344 container create 0350e114bef9a43167a1346743c60950621cfb574ca0d3d77643bb9110b638ea (image=quay.io/ceph/ceph:v19, name=kind_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 04:47:14 np0005540825 systemd[1]: Started libpod-conmon-0350e114bef9a43167a1346743c60950621cfb574ca0d3d77643bb9110b638ea.scope.
Dec  1 04:47:14 np0005540825 podman[75584]: 2025-12-01 09:47:14.302037044 +0000 UTC m=+0.026328760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:14 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f067b3ccdee1e4bc11d2dcf42700ea0194e7c1a78b4f07fdc428478e5f0cc4f8/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f067b3ccdee1e4bc11d2dcf42700ea0194e7c1a78b4f07fdc428478e5f0cc4f8/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f067b3ccdee1e4bc11d2dcf42700ea0194e7c1a78b4f07fdc428478e5f0cc4f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f067b3ccdee1e4bc11d2dcf42700ea0194e7c1a78b4f07fdc428478e5f0cc4f8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f067b3ccdee1e4bc11d2dcf42700ea0194e7c1a78b4f07fdc428478e5f0cc4f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:14 np0005540825 podman[75584]: 2025-12-01 09:47:14.431742408 +0000 UTC m=+0.156034194 container init 0350e114bef9a43167a1346743c60950621cfb574ca0d3d77643bb9110b638ea (image=quay.io/ceph/ceph:v19, name=kind_yalow, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:14 np0005540825 podman[75584]: 2025-12-01 09:47:14.448508041 +0000 UTC m=+0.172799747 container start 0350e114bef9a43167a1346743c60950621cfb574ca0d3d77643bb9110b638ea (image=quay.io/ceph/ceph:v19, name=kind_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  1 04:47:14 np0005540825 podman[75584]: 2025-12-01 09:47:14.452548555 +0000 UTC m=+0.176840261 container attach 0350e114bef9a43167a1346743c60950621cfb574ca0d3d77643bb9110b638ea (image=quay.io/ceph/ceph:v19, name=kind_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  1 04:47:14 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec  1 04:47:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:14 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec  1 04:47:14 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec  1 04:47:14 np0005540825 systemd[1]: libpod-0350e114bef9a43167a1346743c60950621cfb574ca0d3d77643bb9110b638ea.scope: Deactivated successfully.
Dec  1 04:47:14 np0005540825 podman[75584]: 2025-12-01 09:47:14.859353246 +0000 UTC m=+0.583644922 container died 0350e114bef9a43167a1346743c60950621cfb574ca0d3d77643bb9110b638ea (image=quay.io/ceph/ceph:v19, name=kind_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 04:47:14 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f067b3ccdee1e4bc11d2dcf42700ea0194e7c1a78b4f07fdc428478e5f0cc4f8-merged.mount: Deactivated successfully.
Dec  1 04:47:14 np0005540825 podman[75584]: 2025-12-01 09:47:14.898770662 +0000 UTC m=+0.623062338 container remove 0350e114bef9a43167a1346743c60950621cfb574ca0d3d77643bb9110b638ea (image=quay.io/ceph/ceph:v19, name=kind_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:47:14 np0005540825 systemd[1]: libpod-conmon-0350e114bef9a43167a1346743c60950621cfb574ca0d3d77643bb9110b638ea.scope: Deactivated successfully.
Dec  1 04:47:14 np0005540825 podman[75638]: 2025-12-01 09:47:14.993454874 +0000 UTC m=+0.064930545 container create 60fd6f3476d7ac177510db3219000bd2180c190f7c97d78721aefc5b6ddf7f41 (image=quay.io/ceph/ceph:v19, name=hardcore_agnesi, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:47:15 np0005540825 systemd[1]: Started libpod-conmon-60fd6f3476d7ac177510db3219000bd2180c190f7c97d78721aefc5b6ddf7f41.scope.
Dec  1 04:47:15 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:15 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91b8947255eff8f12af54b5a6ee3e5e5ae3185121bb6afbf0972b27edd2c2947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:15 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91b8947255eff8f12af54b5a6ee3e5e5ae3185121bb6afbf0972b27edd2c2947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:15 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91b8947255eff8f12af54b5a6ee3e5e5ae3185121bb6afbf0972b27edd2c2947/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:15 np0005540825 podman[75638]: 2025-12-01 09:47:15.062840203 +0000 UTC m=+0.134315924 container init 60fd6f3476d7ac177510db3219000bd2180c190f7c97d78721aefc5b6ddf7f41 (image=quay.io/ceph/ceph:v19, name=hardcore_agnesi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:15 np0005540825 podman[75638]: 2025-12-01 09:47:14.970502842 +0000 UTC m=+0.041978513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:15 np0005540825 podman[75638]: 2025-12-01 09:47:15.078193109 +0000 UTC m=+0.149668740 container start 60fd6f3476d7ac177510db3219000bd2180c190f7c97d78721aefc5b6ddf7f41 (image=quay.io/ceph/ceph:v19, name=hardcore_agnesi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  1 04:47:15 np0005540825 podman[75638]: 2025-12-01 09:47:15.082353617 +0000 UTC m=+0.153829248 container attach 60fd6f3476d7ac177510db3219000bd2180c190f7c97d78721aefc5b6ddf7f41 (image=quay.io/ceph/ceph:v19, name=hardcore_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:15 np0005540825 ceph-mgr[74709]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  1 04:47:15 np0005540825 ceph-mon[74416]: Set ssh ssh_user
Dec  1 04:47:15 np0005540825 ceph-mon[74416]: Set ssh ssh_config
Dec  1 04:47:15 np0005540825 ceph-mon[74416]: ssh user set to ceph-admin. sudo will be used
Dec  1 04:47:15 np0005540825 ceph-mon[74416]: Set ssh ssh_identity_key
Dec  1 04:47:15 np0005540825 ceph-mon[74416]: Set ssh private key
Dec  1 04:47:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:15 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:15 np0005540825 hardcore_agnesi[75655]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbwLMirZn/yWhqRLRGE8wa/ywOHJBadUkha1DGavZhV/em3WEAWAICALY2XIm324FOCMuQHmMsAL0RsHHNU/8/mbjuihxUPskXBcBs6DwisGYO7v9FFgo/08qnG797y/zSK6LDWKTedrZhvUNTWaMwiZGbjwiXcU4c5g8aTr+bRGYQ9pmbZXbWlk/c1LK6eNAXsHKOPHpTSHuwJcu/g5pGGsncsa9knuxeWHggE8miOgZh7jFI395TvKzwcsB14AX8j2FqIqEyCklaNosSQyNfgWIGDA8AZWF3JMoDa0q2WJ1A8QK1ce6f+6uqC8h6FQmdaR/7qgpjOCATkHge/EmDEqeJlxr58KZPHMk8gZhV8kieICRlWN7Xy+p6YB9Lstlkf/KEvANJBRiL9qpiZ1WjsmpSWfZfnNYdVozrv1hM+KB/FUgFQU5cevo0QiIA5p5CpN9C2g3kBZLhPewguhdvCW2khQYhy1g9KhLKUm2MKkAwU289gs8CVB66OAtIkfs= zuul@controller
Dec  1 04:47:15 np0005540825 systemd[1]: libpod-60fd6f3476d7ac177510db3219000bd2180c190f7c97d78721aefc5b6ddf7f41.scope: Deactivated successfully.
Dec  1 04:47:15 np0005540825 podman[75638]: 2025-12-01 09:47:15.475046063 +0000 UTC m=+0.546521704 container died 60fd6f3476d7ac177510db3219000bd2180c190f7c97d78721aefc5b6ddf7f41 (image=quay.io/ceph/ceph:v19, name=hardcore_agnesi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 04:47:15 np0005540825 systemd[1]: var-lib-containers-storage-overlay-91b8947255eff8f12af54b5a6ee3e5e5ae3185121bb6afbf0972b27edd2c2947-merged.mount: Deactivated successfully.
Dec  1 04:47:15 np0005540825 podman[75638]: 2025-12-01 09:47:15.524573441 +0000 UTC m=+0.596049122 container remove 60fd6f3476d7ac177510db3219000bd2180c190f7c97d78721aefc5b6ddf7f41 (image=quay.io/ceph/ceph:v19, name=hardcore_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 04:47:15 np0005540825 systemd[1]: libpod-conmon-60fd6f3476d7ac177510db3219000bd2180c190f7c97d78721aefc5b6ddf7f41.scope: Deactivated successfully.
Dec  1 04:47:15 np0005540825 podman[75694]: 2025-12-01 09:47:15.59318895 +0000 UTC m=+0.048387599 container create e33cf38a3f849c8218ddba3c007cdad10af1c9682ac39eabc8baf5d2d973ee6b (image=quay.io/ceph/ceph:v19, name=nervous_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  1 04:47:15 np0005540825 systemd[1]: Started libpod-conmon-e33cf38a3f849c8218ddba3c007cdad10af1c9682ac39eabc8baf5d2d973ee6b.scope.
Dec  1 04:47:15 np0005540825 podman[75694]: 2025-12-01 09:47:15.566287306 +0000 UTC m=+0.021485945 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:15 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:15 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84dbe040ffc95d5560cd1dca1702547a1d0d9a0d1cbd6eb764dacde730700bb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:15 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84dbe040ffc95d5560cd1dca1702547a1d0d9a0d1cbd6eb764dacde730700bb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:15 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84dbe040ffc95d5560cd1dca1702547a1d0d9a0d1cbd6eb764dacde730700bb0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:15 np0005540825 podman[75694]: 2025-12-01 09:47:15.692990534 +0000 UTC m=+0.148189153 container init e33cf38a3f849c8218ddba3c007cdad10af1c9682ac39eabc8baf5d2d973ee6b (image=quay.io/ceph/ceph:v19, name=nervous_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:15 np0005540825 podman[75694]: 2025-12-01 09:47:15.704210593 +0000 UTC m=+0.159409242 container start e33cf38a3f849c8218ddba3c007cdad10af1c9682ac39eabc8baf5d2d973ee6b (image=quay.io/ceph/ceph:v19, name=nervous_keller, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Dec  1 04:47:15 np0005540825 podman[75694]: 2025-12-01 09:47:15.709534851 +0000 UTC m=+0.164733480 container attach e33cf38a3f849c8218ddba3c007cdad10af1c9682ac39eabc8baf5d2d973ee6b (image=quay.io/ceph/ceph:v19, name=nervous_keller, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 04:47:16 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:16 np0005540825 ceph-mon[74416]: Set ssh ssh_identity_pub
Dec  1 04:47:16 np0005540825 systemd[1]: Created slice User Slice of UID 42477.
Dec  1 04:47:16 np0005540825 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  1 04:47:16 np0005540825 systemd-logind[789]: New session 21 of user ceph-admin.
Dec  1 04:47:16 np0005540825 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  1 04:47:16 np0005540825 systemd[1]: Starting User Manager for UID 42477...
Dec  1 04:47:16 np0005540825 systemd[75739]: Queued start job for default target Main User Target.
Dec  1 04:47:16 np0005540825 systemd[75739]: Created slice User Application Slice.
Dec  1 04:47:16 np0005540825 systemd[75739]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  1 04:47:16 np0005540825 systemd[75739]: Started Daily Cleanup of User's Temporary Directories.
Dec  1 04:47:16 np0005540825 systemd[75739]: Reached target Paths.
Dec  1 04:47:16 np0005540825 systemd[75739]: Reached target Timers.
Dec  1 04:47:16 np0005540825 systemd[75739]: Starting D-Bus User Message Bus Socket...
Dec  1 04:47:16 np0005540825 systemd[75739]: Starting Create User's Volatile Files and Directories...
Dec  1 04:47:16 np0005540825 systemd[75739]: Listening on D-Bus User Message Bus Socket.
Dec  1 04:47:16 np0005540825 systemd[75739]: Reached target Sockets.
Dec  1 04:47:16 np0005540825 systemd[75739]: Finished Create User's Volatile Files and Directories.
Dec  1 04:47:16 np0005540825 systemd[75739]: Reached target Basic System.
Dec  1 04:47:16 np0005540825 systemd[75739]: Reached target Main User Target.
Dec  1 04:47:16 np0005540825 systemd[75739]: Startup finished in 152ms.
Dec  1 04:47:16 np0005540825 systemd[1]: Started User Manager for UID 42477.
Dec  1 04:47:16 np0005540825 systemd[1]: Started Session 21 of User ceph-admin.
Dec  1 04:47:16 np0005540825 systemd-logind[789]: New session 23 of user ceph-admin.
Dec  1 04:47:16 np0005540825 systemd[1]: Started Session 23 of User ceph-admin.
Dec  1 04:47:16 np0005540825 systemd-logind[789]: New session 24 of user ceph-admin.
Dec  1 04:47:16 np0005540825 systemd[1]: Started Session 24 of User ceph-admin.
Dec  1 04:47:17 np0005540825 ceph-mgr[74709]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  1 04:47:17 np0005540825 systemd-logind[789]: New session 25 of user ceph-admin.
Dec  1 04:47:17 np0005540825 systemd[1]: Started Session 25 of User ceph-admin.
Dec  1 04:47:17 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec  1 04:47:17 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec  1 04:47:17 np0005540825 systemd-logind[789]: New session 26 of user ceph-admin.
Dec  1 04:47:17 np0005540825 systemd[1]: Started Session 26 of User ceph-admin.
Dec  1 04:47:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053114 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:47:18 np0005540825 systemd-logind[789]: New session 27 of user ceph-admin.
Dec  1 04:47:18 np0005540825 systemd[1]: Started Session 27 of User ceph-admin.
Dec  1 04:47:18 np0005540825 ceph-mon[74416]: Deploying cephadm binary to compute-0
Dec  1 04:47:18 np0005540825 systemd-logind[789]: New session 28 of user ceph-admin.
Dec  1 04:47:18 np0005540825 systemd[1]: Started Session 28 of User ceph-admin.
Dec  1 04:47:19 np0005540825 systemd-logind[789]: New session 29 of user ceph-admin.
Dec  1 04:47:19 np0005540825 systemd[1]: Started Session 29 of User ceph-admin.
Dec  1 04:47:19 np0005540825 ceph-mgr[74709]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  1 04:47:19 np0005540825 systemd-logind[789]: New session 30 of user ceph-admin.
Dec  1 04:47:19 np0005540825 systemd[1]: Started Session 30 of User ceph-admin.
Dec  1 04:47:19 np0005540825 systemd-logind[789]: New session 31 of user ceph-admin.
Dec  1 04:47:19 np0005540825 systemd[1]: Started Session 31 of User ceph-admin.
Dec  1 04:47:21 np0005540825 systemd-logind[789]: New session 32 of user ceph-admin.
Dec  1 04:47:21 np0005540825 systemd[1]: Started Session 32 of User ceph-admin.
Dec  1 04:47:21 np0005540825 ceph-mgr[74709]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  1 04:47:21 np0005540825 systemd-logind[789]: New session 33 of user ceph-admin.
Dec  1 04:47:21 np0005540825 systemd[1]: Started Session 33 of User ceph-admin.
Dec  1 04:47:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  1 04:47:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:22 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Added host compute-0
Dec  1 04:47:22 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  1 04:47:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  1 04:47:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  1 04:47:22 np0005540825 nervous_keller[75709]: Added host 'compute-0' with addr '192.168.122.100'
Dec  1 04:47:22 np0005540825 systemd[1]: libpod-e33cf38a3f849c8218ddba3c007cdad10af1c9682ac39eabc8baf5d2d973ee6b.scope: Deactivated successfully.
Dec  1 04:47:22 np0005540825 podman[76105]: 2025-12-01 09:47:22.33850228 +0000 UTC m=+0.040874975 container died e33cf38a3f849c8218ddba3c007cdad10af1c9682ac39eabc8baf5d2d973ee6b (image=quay.io/ceph/ceph:v19, name=nervous_keller, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 04:47:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:47:23 np0005540825 ceph-mgr[74709]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  1 04:47:23 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:23 np0005540825 ceph-mon[74416]: Added host compute-0
Dec  1 04:47:23 np0005540825 systemd[1]: var-lib-containers-storage-overlay-84dbe040ffc95d5560cd1dca1702547a1d0d9a0d1cbd6eb764dacde730700bb0-merged.mount: Deactivated successfully.
Dec  1 04:47:23 np0005540825 podman[76105]: 2025-12-01 09:47:23.440003826 +0000 UTC m=+1.142376491 container remove e33cf38a3f849c8218ddba3c007cdad10af1c9682ac39eabc8baf5d2d973ee6b (image=quay.io/ceph/ceph:v19, name=nervous_keller, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 04:47:23 np0005540825 systemd[1]: libpod-conmon-e33cf38a3f849c8218ddba3c007cdad10af1c9682ac39eabc8baf5d2d973ee6b.scope: Deactivated successfully.
Dec  1 04:47:23 np0005540825 podman[76172]: 2025-12-01 09:47:23.548586146 +0000 UTC m=+0.065018117 container create 7d58aa02428ecea451d6558e4d5c5e888506e839954aa959adf2b26434540a5e (image=quay.io/ceph/ceph:v19, name=gifted_khorana, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  1 04:47:23 np0005540825 systemd[1]: Started libpod-conmon-7d58aa02428ecea451d6558e4d5c5e888506e839954aa959adf2b26434540a5e.scope.
Dec  1 04:47:23 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912f234b8b6d1aae330fae570c3aaba7deba9cbfef35963908bd8fd6065a75bb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912f234b8b6d1aae330fae570c3aaba7deba9cbfef35963908bd8fd6065a75bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912f234b8b6d1aae330fae570c3aaba7deba9cbfef35963908bd8fd6065a75bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:23 np0005540825 podman[76172]: 2025-12-01 09:47:23.530817938 +0000 UTC m=+0.047249929 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:23 np0005540825 podman[76172]: 2025-12-01 09:47:23.638263469 +0000 UTC m=+0.154695480 container init 7d58aa02428ecea451d6558e4d5c5e888506e839954aa959adf2b26434540a5e (image=quay.io/ceph/ceph:v19, name=gifted_khorana, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  1 04:47:23 np0005540825 podman[76172]: 2025-12-01 09:47:23.650062603 +0000 UTC m=+0.166494614 container start 7d58aa02428ecea451d6558e4d5c5e888506e839954aa959adf2b26434540a5e (image=quay.io/ceph/ceph:v19, name=gifted_khorana, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:23 np0005540825 podman[76172]: 2025-12-01 09:47:23.654528598 +0000 UTC m=+0.170960619 container attach 7d58aa02428ecea451d6558e4d5c5e888506e839954aa959adf2b26434540a5e (image=quay.io/ceph/ceph:v19, name=gifted_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:24 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:24 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec  1 04:47:24 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec  1 04:47:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  1 04:47:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:24 np0005540825 gifted_khorana[76199]: Scheduled mon update...
Dec  1 04:47:24 np0005540825 systemd[1]: libpod-7d58aa02428ecea451d6558e4d5c5e888506e839954aa959adf2b26434540a5e.scope: Deactivated successfully.
Dec  1 04:47:24 np0005540825 podman[76172]: 2025-12-01 09:47:24.042284077 +0000 UTC m=+0.558716088 container died 7d58aa02428ecea451d6558e4d5c5e888506e839954aa959adf2b26434540a5e (image=quay.io/ceph/ceph:v19, name=gifted_khorana, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:24 np0005540825 systemd[1]: var-lib-containers-storage-overlay-912f234b8b6d1aae330fae570c3aaba7deba9cbfef35963908bd8fd6065a75bb-merged.mount: Deactivated successfully.
Dec  1 04:47:24 np0005540825 podman[76172]: 2025-12-01 09:47:24.089670629 +0000 UTC m=+0.606102630 container remove 7d58aa02428ecea451d6558e4d5c5e888506e839954aa959adf2b26434540a5e (image=quay.io/ceph/ceph:v19, name=gifted_khorana, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  1 04:47:24 np0005540825 systemd[1]: libpod-conmon-7d58aa02428ecea451d6558e4d5c5e888506e839954aa959adf2b26434540a5e.scope: Deactivated successfully.
Dec  1 04:47:24 np0005540825 podman[76240]: 2025-12-01 09:47:24.173768148 +0000 UTC m=+0.053690096 container create 39a4085ef7f9a52fb9bbd473f02c1e84c322d7c88d9ae24b04007c9eef7cd26a (image=quay.io/ceph/ceph:v19, name=fervent_lumiere, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:24 np0005540825 systemd[1]: Started libpod-conmon-39a4085ef7f9a52fb9bbd473f02c1e84c322d7c88d9ae24b04007c9eef7cd26a.scope.
Dec  1 04:47:24 np0005540825 podman[76174]: 2025-12-01 09:47:24.244595194 +0000 UTC m=+0.748493052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:24 np0005540825 podman[76240]: 2025-12-01 09:47:24.150064516 +0000 UTC m=+0.029986474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:24 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8be39d9901c5e502282bf31c0c00a846e9c7f1dea8b74baeafe7c13a83b8de5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8be39d9901c5e502282bf31c0c00a846e9c7f1dea8b74baeafe7c13a83b8de5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8be39d9901c5e502282bf31c0c00a846e9c7f1dea8b74baeafe7c13a83b8de5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:24 np0005540825 podman[76240]: 2025-12-01 09:47:24.285987021 +0000 UTC m=+0.165909019 container init 39a4085ef7f9a52fb9bbd473f02c1e84c322d7c88d9ae24b04007c9eef7cd26a (image=quay.io/ceph/ceph:v19, name=fervent_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:24 np0005540825 podman[76240]: 2025-12-01 09:47:24.296120443 +0000 UTC m=+0.176042401 container start 39a4085ef7f9a52fb9bbd473f02c1e84c322d7c88d9ae24b04007c9eef7cd26a (image=quay.io/ceph/ceph:v19, name=fervent_lumiere, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:24 np0005540825 podman[76240]: 2025-12-01 09:47:24.299665134 +0000 UTC m=+0.179587072 container attach 39a4085ef7f9a52fb9bbd473f02c1e84c322d7c88d9ae24b04007c9eef7cd26a (image=quay.io/ceph/ceph:v19, name=fervent_lumiere, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:24 np0005540825 podman[76275]: 2025-12-01 09:47:24.403683427 +0000 UTC m=+0.062650767 container create 6c296858d1dee99b42e35364acdd2b7431f9d4cccc6a028855c0937abe9ee4ce (image=quay.io/ceph/ceph:v19, name=busy_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:24 np0005540825 systemd[1]: Started libpod-conmon-6c296858d1dee99b42e35364acdd2b7431f9d4cccc6a028855c0937abe9ee4ce.scope.
Dec  1 04:47:24 np0005540825 podman[76275]: 2025-12-01 09:47:24.377005869 +0000 UTC m=+0.035973279 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:24 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:24 np0005540825 podman[76275]: 2025-12-01 09:47:24.495503685 +0000 UTC m=+0.154471085 container init 6c296858d1dee99b42e35364acdd2b7431f9d4cccc6a028855c0937abe9ee4ce (image=quay.io/ceph/ceph:v19, name=busy_booth, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:24 np0005540825 podman[76275]: 2025-12-01 09:47:24.50542299 +0000 UTC m=+0.164390320 container start 6c296858d1dee99b42e35364acdd2b7431f9d4cccc6a028855c0937abe9ee4ce (image=quay.io/ceph/ceph:v19, name=busy_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:24 np0005540825 podman[76275]: 2025-12-01 09:47:24.509158267 +0000 UTC m=+0.168125687 container attach 6c296858d1dee99b42e35364acdd2b7431f9d4cccc6a028855c0937abe9ee4ce (image=quay.io/ceph/ceph:v19, name=busy_booth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:24 np0005540825 busy_booth[76311]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec  1 04:47:24 np0005540825 systemd[1]: libpod-6c296858d1dee99b42e35364acdd2b7431f9d4cccc6a028855c0937abe9ee4ce.scope: Deactivated successfully.
Dec  1 04:47:24 np0005540825 podman[76275]: 2025-12-01 09:47:24.618115657 +0000 UTC m=+0.277083027 container died 6c296858d1dee99b42e35364acdd2b7431f9d4cccc6a028855c0937abe9ee4ce (image=quay.io/ceph/ceph:v19, name=busy_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 04:47:24 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1a9bdd841d38526985e0dba8c6ec1e8695d1dd04646af8264361139adde2db18-merged.mount: Deactivated successfully.
Dec  1 04:47:24 np0005540825 podman[76275]: 2025-12-01 09:47:24.659413422 +0000 UTC m=+0.318380792 container remove 6c296858d1dee99b42e35364acdd2b7431f9d4cccc6a028855c0937abe9ee4ce (image=quay.io/ceph/ceph:v19, name=busy_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:24 np0005540825 systemd[1]: libpod-conmon-6c296858d1dee99b42e35364acdd2b7431f9d4cccc6a028855c0937abe9ee4ce.scope: Deactivated successfully.
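The busy_booth container whose teardown completes above is one of cephadm's short-lived helper containers: it runs the Ceph image once, prints the image's version banner ("ceph version 19.2.3 ... squid (stable)"), and is removed within a fraction of a second. A minimal sketch of the same probe, assuming only that podman is installed and quay.io/ceph/ceph:v19 is pullable (the image name is taken from the log; the exact command is inferred from the banner format):

    # Run the Ceph image once, capture its version banner, and discard the
    # container -- the same create/start/attach/died/remove cycle logged above.
    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v19", "ceph", "--version"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # e.g. "ceph version 19.2.3 (...) squid (stable)"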
Dec  1 04:47:24 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:24 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec  1 04:47:24 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec  1 04:47:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  1 04:47:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:24 np0005540825 fervent_lumiere[76256]: Scheduled mgr update...
Dec  1 04:47:24 np0005540825 systemd[1]: libpod-39a4085ef7f9a52fb9bbd473f02c1e84c322d7c88d9ae24b04007c9eef7cd26a.scope: Deactivated successfully.
Dec  1 04:47:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec  1 04:47:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:24 np0005540825 podman[76330]: 2025-12-01 09:47:24.747992846 +0000 UTC m=+0.027598973 container died 39a4085ef7f9a52fb9bbd473f02c1e84c322d7c88d9ae24b04007c9eef7cd26a (image=quay.io/ceph/ceph:v19, name=fervent_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  1 04:47:24 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f8be39d9901c5e502282bf31c0c00a846e9c7f1dea8b74baeafe7c13a83b8de5-merged.mount: Deactivated successfully.
Dec  1 04:47:24 np0005540825 podman[76330]: 2025-12-01 09:47:24.793998662 +0000 UTC m=+0.073604729 container remove 39a4085ef7f9a52fb9bbd473f02c1e84c322d7c88d9ae24b04007c9eef7cd26a (image=quay.io/ceph/ceph:v19, name=fervent_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 04:47:24 np0005540825 systemd[1]: libpod-conmon-39a4085ef7f9a52fb9bbd473f02c1e84c322d7c88d9ae24b04007c9eef7cd26a.scope: Deactivated successfully.
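The fervent_lumiere helper removed above is the one that dispatched the "orch apply mgr" command audited earlier; its stdout ("Scheduled mgr update...") is the command's return message, and the mon persists the resulting spec under mgr/cephadm/spec.mgr. A minimal replay of that dispatch via the python-rados bindings, assuming the rados module is installed, /etc/ceph/ceph.conf plus an admin keyring are in place, and that mgr_command() reaches the orchestrator module the same way the CLI's mon-mgr target does (the JSON payload itself is copied from the audit entry):

    # Re-issue the dispatched command recorded in the audit log. mgr_command()
    # sends the JSON to the active mgr, where the cephadm orchestrator handles it.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "orch apply", "service_type": "mgr"})
    ret, outbuf, outs = cluster.mgr_command(cmd, b"")
    print(ret, outbuf.decode() or outs)  # expect "Scheduled mgr update..."
    cluster.shutdown()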
Dec  1 04:47:24 np0005540825 podman[76367]: 2025-12-01 09:47:24.884087536 +0000 UTC m=+0.055198015 container create 72aab16e02b20b39889519ac24e067ab4b90a270ecb76fe9ede406fda472f2f3 (image=quay.io/ceph/ceph:v19, name=strange_haslett, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:24 np0005540825 systemd[1]: Started libpod-conmon-72aab16e02b20b39889519ac24e067ab4b90a270ecb76fe9ede406fda472f2f3.scope.
Dec  1 04:47:24 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:24 np0005540825 podman[76367]: 2025-12-01 09:47:24.863608158 +0000 UTC m=+0.034718567 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21216c4685cb69a201f38da08461a902cd735f64e14cdbc5debbf53b9cb68e7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21216c4685cb69a201f38da08461a902cd735f64e14cdbc5debbf53b9cb68e7c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21216c4685cb69a201f38da08461a902cd735f64e14cdbc5debbf53b9cb68e7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
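The kernel's "supports timestamps until 2038 (0x7fffffff)" notices on these bind mounts come from xfs inodes without the bigtime feature: timestamps are 32-bit signed seconds, and 0x7fffffff seconds after the Unix epoch lands at 2038-01-19 03:14:07 UTC. A one-liner confirms the cutoff:

    # 0x7fffffff (2**31 - 1) seconds after the epoch is the classic time_t limit.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00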
Dec  1 04:47:24 np0005540825 podman[76367]: 2025-12-01 09:47:24.977567696 +0000 UTC m=+0.148678165 container init 72aab16e02b20b39889519ac24e067ab4b90a270ecb76fe9ede406fda472f2f3 (image=quay.io/ceph/ceph:v19, name=strange_haslett, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  1 04:47:24 np0005540825 podman[76367]: 2025-12-01 09:47:24.988588851 +0000 UTC m=+0.159699250 container start 72aab16e02b20b39889519ac24e067ab4b90a270ecb76fe9ede406fda472f2f3 (image=quay.io/ceph/ceph:v19, name=strange_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 04:47:25 np0005540825 podman[76367]: 2025-12-01 09:47:25.003335391 +0000 UTC m=+0.174445820 container attach 72aab16e02b20b39889519ac24e067ab4b90a270ecb76fe9ede406fda472f2f3 (image=quay.io/ceph/ceph:v19, name=strange_haslett, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 04:47:25 np0005540825 ceph-mon[74416]: Saving service mon spec with placement count:5
Dec  1 04:47:25 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:25 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:25 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:47:25 np0005540825 ceph-mgr[74709]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  1 04:47:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:25 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:25 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service crash spec with placement *
Dec  1 04:47:25 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec  1 04:47:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  1 04:47:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:25 np0005540825 strange_haslett[76411]: Scheduled crash update...
Dec  1 04:47:25 np0005540825 systemd[1]: libpod-72aab16e02b20b39889519ac24e067ab4b90a270ecb76fe9ede406fda472f2f3.scope: Deactivated successfully.
Dec  1 04:47:25 np0005540825 podman[76367]: 2025-12-01 09:47:25.365273255 +0000 UTC m=+0.536383634 container died 72aab16e02b20b39889519ac24e067ab4b90a270ecb76fe9ede406fda472f2f3 (image=quay.io/ceph/ceph:v19, name=strange_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  1 04:47:25 np0005540825 systemd[1]: var-lib-containers-storage-overlay-21216c4685cb69a201f38da08461a902cd735f64e14cdbc5debbf53b9cb68e7c-merged.mount: Deactivated successfully.
Dec  1 04:47:25 np0005540825 podman[76367]: 2025-12-01 09:47:25.409805203 +0000 UTC m=+0.580915622 container remove 72aab16e02b20b39889519ac24e067ab4b90a270ecb76fe9ede406fda472f2f3 (image=quay.io/ceph/ceph:v19, name=strange_haslett, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:47:25 np0005540825 systemd[1]: libpod-conmon-72aab16e02b20b39889519ac24e067ab4b90a270ecb76fe9ede406fda472f2f3.scope: Deactivated successfully.
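Every helper in this section follows the same podman lifecycle visible above: image pull, container create, init, start, attach, died, remove, with systemd tearing down the libcrun and conmon scopes around it. A small, hypothetical parser (illustrative only, not part of any Ceph or podman tooling) that pairs the create/remove events from journal lines like these and reports per-container lifetimes:

    # Pair podman "container create"/"container remove" events by container ID
    # and yield each container's lifetime in seconds.
    import re
    from datetime import datetime

    EVENT = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
        r".* container (?P<event>create|remove) (?P<cid>[0-9a-f]{64})"
    )

    def lifetimes(lines):
        created = {}
        for line in lines:
            m = EVENT.search(line)
            if not m:
                continue
            # strptime's %f takes at most 6 fractional digits; truncate the rest.
            ts = datetime.strptime(m["ts"][:26], "%Y-%m-%d %H:%M:%S.%f")
            if m["event"] == "create":
                created[m["cid"]] = ts
            elif m["cid"] in created:
                yield m["cid"][:12], (ts - created.pop(m["cid"])).total_seconds()

Fed the strange_haslett entries above (create at 09:47:24.884, remove at 09:47:25.409), this yields a lifetime of roughly half a second.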
Dec  1 04:47:25 np0005540825 podman[76518]: 2025-12-01 09:47:25.484603542 +0000 UTC m=+0.046897820 container create 43f677bee0aa1c1be63c071bde3a94a8be4e0e0e9cbc1410397d2b0612f82797 (image=quay.io/ceph/ceph:v19, name=zen_shockley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 04:47:25 np0005540825 systemd[1]: Started libpod-conmon-43f677bee0aa1c1be63c071bde3a94a8be4e0e0e9cbc1410397d2b0612f82797.scope.
Dec  1 04:47:25 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ab7c48a11a5c6536d93729c26361ee9c91e9a18f41673fb37683d1bc78b6ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ab7c48a11a5c6536d93729c26361ee9c91e9a18f41673fb37683d1bc78b6ac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ab7c48a11a5c6536d93729c26361ee9c91e9a18f41673fb37683d1bc78b6ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:25 np0005540825 podman[76518]: 2025-12-01 09:47:25.464255677 +0000 UTC m=+0.026550005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:25 np0005540825 podman[76518]: 2025-12-01 09:47:25.593048889 +0000 UTC m=+0.155343197 container init 43f677bee0aa1c1be63c071bde3a94a8be4e0e0e9cbc1410397d2b0612f82797 (image=quay.io/ceph/ceph:v19, name=zen_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 04:47:25 np0005540825 podman[76518]: 2025-12-01 09:47:25.604059953 +0000 UTC m=+0.166354261 container start 43f677bee0aa1c1be63c071bde3a94a8be4e0e0e9cbc1410397d2b0612f82797 (image=quay.io/ceph/ceph:v19, name=zen_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:25 np0005540825 podman[76518]: 2025-12-01 09:47:25.608538308 +0000 UTC m=+0.170832586 container attach 43f677bee0aa1c1be63c071bde3a94a8be4e0e0e9cbc1410397d2b0612f82797 (image=quay.io/ceph/ceph:v19, name=zen_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  1 04:47:25 np0005540825 podman[76629]: 2025-12-01 09:47:25.922201347 +0000 UTC m=+0.081142843 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 04:47:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec  1 04:47:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4141945833' entity='client.admin' 
Dec  1 04:47:26 np0005540825 systemd[1]: libpod-43f677bee0aa1c1be63c071bde3a94a8be4e0e0e9cbc1410397d2b0612f82797.scope: Deactivated successfully.
Dec  1 04:47:26 np0005540825 podman[76518]: 2025-12-01 09:47:26.011225673 +0000 UTC m=+0.573519971 container died 43f677bee0aa1c1be63c071bde3a94a8be4e0e0e9cbc1410397d2b0612f82797 (image=quay.io/ceph/ceph:v19, name=zen_shockley, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 04:47:26 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e4ab7c48a11a5c6536d93729c26361ee9c91e9a18f41673fb37683d1bc78b6ac-merged.mount: Deactivated successfully.
Dec  1 04:47:26 np0005540825 podman[76518]: 2025-12-01 09:47:26.061603802 +0000 UTC m=+0.623898100 container remove 43f677bee0aa1c1be63c071bde3a94a8be4e0e0e9cbc1410397d2b0612f82797 (image=quay.io/ceph/ceph:v19, name=zen_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:26 np0005540825 podman[76629]: 2025-12-01 09:47:26.066588251 +0000 UTC m=+0.225529707 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
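In contrast to the throwaway helpers, the exec/exec_died pair above targets the long-running mon container (ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0): cephadm reuses it and runs one-off commands inside it via podman exec. A sketch of the same mechanism, assuming podman and the container name exactly as logged; the specific ceph subcommand is illustrative, since the log does not record which one was exec'd:

    # One-off command inside the existing mon container, matching the
    # "container exec" / "container exec_died" events above. "ceph -s" is
    # only an example; the actual exec'd command is not recorded in the log.
    import subprocess

    subprocess.run(
        ["podman", "exec",
         "ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0",
         "ceph", "-s"],
        check=True,
    )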
Dec  1 04:47:26 np0005540825 systemd[1]: libpod-conmon-43f677bee0aa1c1be63c071bde3a94a8be4e0e0e9cbc1410397d2b0612f82797.scope: Deactivated successfully.
Dec  1 04:47:26 np0005540825 podman[76674]: 2025-12-01 09:47:26.137194601 +0000 UTC m=+0.050692878 container create 34649ca33819fc42837114b220b16bacac919a90eefa6db807cb1c305d51577e (image=quay.io/ceph/ceph:v19, name=frosty_hamilton, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:47:26 np0005540825 systemd[1]: Started libpod-conmon-34649ca33819fc42837114b220b16bacac919a90eefa6db807cb1c305d51577e.scope.
Dec  1 04:47:26 np0005540825 podman[76674]: 2025-12-01 09:47:26.110250897 +0000 UTC m=+0.023749224 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:26 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:26 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3595b4425131ae5b42a39dae68a8a0860e879c43b8218c2fcd8e31bf61b1653a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:26 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3595b4425131ae5b42a39dae68a8a0860e879c43b8218c2fcd8e31bf61b1653a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:26 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3595b4425131ae5b42a39dae68a8a0860e879c43b8218c2fcd8e31bf61b1653a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:26 np0005540825 ceph-mon[74416]: Saving service mgr spec with placement count:2
Dec  1 04:47:26 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:26 np0005540825 ceph-mon[74416]: Saving service crash spec with placement *
Dec  1 04:47:26 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:26 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4141945833' entity='client.admin' 
Dec  1 04:47:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:47:26 np0005540825 podman[76674]: 2025-12-01 09:47:26.239811788 +0000 UTC m=+0.153310045 container init 34649ca33819fc42837114b220b16bacac919a90eefa6db807cb1c305d51577e (image=quay.io/ceph/ceph:v19, name=frosty_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:26 np0005540825 podman[76674]: 2025-12-01 09:47:26.250792661 +0000 UTC m=+0.164290908 container start 34649ca33819fc42837114b220b16bacac919a90eefa6db807cb1c305d51577e (image=quay.io/ceph/ceph:v19, name=frosty_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:26 np0005540825 podman[76674]: 2025-12-01 09:47:26.254710852 +0000 UTC m=+0.168209099 container attach 34649ca33819fc42837114b220b16bacac919a90eefa6db807cb1c305d51577e (image=quay.io/ceph/ceph:v19, name=frosty_hamilton, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:26 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
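The audit entry above records the exact payload of the client-keyring request ({"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin"}), which the mgr then persists under mgr/cephadm/client_keyrings. Replaying it needs only that payload swapped into the rados sketch shown earlier; the "orch apply crash" and "orch host label add" entries nearby differ only in their JSON. Same assumptions as before, using Rados as a context manager:

    # Same mgr_command pattern as above, payload copied from the audit entry.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, _, outs = cluster.mgr_command(json.dumps(
            {"prefix": "orch client-keyring set",
             "entity": "client.admin",
             "placement": "label:_admin"}), b"")
        print(ret, outs)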
Dec  1 04:47:26 np0005540825 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76800 (sysctl)
Dec  1 04:47:26 np0005540825 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  1 04:47:26 np0005540825 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  1 04:47:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:26 np0005540825 podman[76674]: 2025-12-01 09:47:26.795467537 +0000 UTC m=+0.708965804 container died 34649ca33819fc42837114b220b16bacac919a90eefa6db807cb1c305d51577e (image=quay.io/ceph/ceph:v19, name=frosty_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 04:47:26 np0005540825 systemd[1]: libpod-34649ca33819fc42837114b220b16bacac919a90eefa6db807cb1c305d51577e.scope: Deactivated successfully.
Dec  1 04:47:26 np0005540825 systemd[1]: var-lib-containers-storage-overlay-3595b4425131ae5b42a39dae68a8a0860e879c43b8218c2fcd8e31bf61b1653a-merged.mount: Deactivated successfully.
Dec  1 04:47:26 np0005540825 podman[76674]: 2025-12-01 09:47:26.836601118 +0000 UTC m=+0.750099395 container remove 34649ca33819fc42837114b220b16bacac919a90eefa6db807cb1c305d51577e (image=quay.io/ceph/ceph:v19, name=frosty_hamilton, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Dec  1 04:47:26 np0005540825 systemd[1]: libpod-conmon-34649ca33819fc42837114b220b16bacac919a90eefa6db807cb1c305d51577e.scope: Deactivated successfully.
Dec  1 04:47:26 np0005540825 podman[76819]: 2025-12-01 09:47:26.927861401 +0000 UTC m=+0.058784856 container create a63dd984b550a5a2a6badab3afea4bcaeea0cee3136d61f51b791498bae4f02f (image=quay.io/ceph/ceph:v19, name=nice_joliot, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 04:47:26 np0005540825 systemd[1]: Started libpod-conmon-a63dd984b550a5a2a6badab3afea4bcaeea0cee3136d61f51b791498bae4f02f.scope.
Dec  1 04:47:26 np0005540825 podman[76819]: 2025-12-01 09:47:26.903542794 +0000 UTC m=+0.034466249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:27 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9e247445d093193159fa5586f741658b4b149a625379683b36a3af835398cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9e247445d093193159fa5586f741658b4b149a625379683b36a3af835398cd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9e247445d093193159fa5586f741658b4b149a625379683b36a3af835398cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:27 np0005540825 podman[76819]: 2025-12-01 09:47:27.032572772 +0000 UTC m=+0.163496227 container init a63dd984b550a5a2a6badab3afea4bcaeea0cee3136d61f51b791498bae4f02f (image=quay.io/ceph/ceph:v19, name=nice_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  1 04:47:27 np0005540825 podman[76819]: 2025-12-01 09:47:27.045569477 +0000 UTC m=+0.176492902 container start a63dd984b550a5a2a6badab3afea4bcaeea0cee3136d61f51b791498bae4f02f (image=quay.io/ceph/ceph:v19, name=nice_joliot, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  1 04:47:27 np0005540825 podman[76819]: 2025-12-01 09:47:27.049785986 +0000 UTC m=+0.180709481 container attach a63dd984b550a5a2a6badab3afea4bcaeea0cee3136d61f51b791498bae4f02f (image=quay.io/ceph/ceph:v19, name=nice_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:27 np0005540825 ceph-mgr[74709]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  1 04:47:27 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:27 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:27 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  1 04:47:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:27 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Added label _admin to host compute-0
Dec  1 04:47:27 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec  1 04:47:27 np0005540825 nice_joliot[76839]: Added label _admin to host compute-0
Dec  1 04:47:27 np0005540825 systemd[1]: libpod-a63dd984b550a5a2a6badab3afea4bcaeea0cee3136d61f51b791498bae4f02f.scope: Deactivated successfully.
Dec  1 04:47:27 np0005540825 podman[76819]: 2025-12-01 09:47:27.462792037 +0000 UTC m=+0.593715512 container died a63dd984b550a5a2a6badab3afea4bcaeea0cee3136d61f51b791498bae4f02f (image=quay.io/ceph/ceph:v19, name=nice_joliot, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:27 np0005540825 systemd[1]: var-lib-containers-storage-overlay-6c9e247445d093193159fa5586f741658b4b149a625379683b36a3af835398cd-merged.mount: Deactivated successfully.
Dec  1 04:47:27 np0005540825 podman[76819]: 2025-12-01 09:47:27.510365133 +0000 UTC m=+0.641288568 container remove a63dd984b550a5a2a6badab3afea4bcaeea0cee3136d61f51b791498bae4f02f (image=quay.io/ceph/ceph:v19, name=nice_joliot, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:27 np0005540825 systemd[1]: libpod-conmon-a63dd984b550a5a2a6badab3afea4bcaeea0cee3136d61f51b791498bae4f02f.scope: Deactivated successfully.
Dec  1 04:47:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:47:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:27 np0005540825 podman[76957]: 2025-12-01 09:47:27.576094467 +0000 UTC m=+0.043399499 container create 25304a2619df8e90899bad86f0f342b863313d3ee3b5a04873602ac48837ccc2 (image=quay.io/ceph/ceph:v19, name=frosty_cray, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:47:27 np0005540825 systemd[1]: Started libpod-conmon-25304a2619df8e90899bad86f0f342b863313d3ee3b5a04873602ac48837ccc2.scope.
Dec  1 04:47:27 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10bc3bcf4ae6a63e659520d981ee3e7a3c272b1512ec857cdf0f80e74940bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10bc3bcf4ae6a63e659520d981ee3e7a3c272b1512ec857cdf0f80e74940bda/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10bc3bcf4ae6a63e659520d981ee3e7a3c272b1512ec857cdf0f80e74940bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:27 np0005540825 podman[76957]: 2025-12-01 09:47:27.558904654 +0000 UTC m=+0.026209676 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:27 np0005540825 podman[76957]: 2025-12-01 09:47:27.662797963 +0000 UTC m=+0.130103015 container init 25304a2619df8e90899bad86f0f342b863313d3ee3b5a04873602ac48837ccc2 (image=quay.io/ceph/ceph:v19, name=frosty_cray, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 04:47:27 np0005540825 podman[76957]: 2025-12-01 09:47:27.673100319 +0000 UTC m=+0.140405371 container start 25304a2619df8e90899bad86f0f342b863313d3ee3b5a04873602ac48837ccc2 (image=quay.io/ceph/ceph:v19, name=frosty_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 04:47:27 np0005540825 podman[76957]: 2025-12-01 09:47:27.677163684 +0000 UTC m=+0.144468736 container attach 25304a2619df8e90899bad86f0f342b863313d3ee3b5a04873602ac48837ccc2 (image=quay.io/ceph/ceph:v19, name=frosty_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  1 04:47:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:47:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec  1 04:47:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3275012895' entity='client.admin' 
Dec  1 04:47:28 np0005540825 frosty_cray[76997]: set mgr/dashboard/cluster/status
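config-key writes such as mgr/dashboard/cluster/status are monitor-side state, which is why ceph-mon rather than ceph-mgr logs handle_command for them above. A minimal sketch using mon_command(), under the same assumptions as the earlier rados examples; the "val" below is a placeholder, since the stored value is not shown in the log:

    # config-key operations go to the monitors; mon_command() talks to the
    # quorum directly. The value here is a placeholder, not taken from the log.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, _, outs = cluster.mon_command(json.dumps(
            {"prefix": "config-key set",
             "key": "mgr/dashboard/cluster/status",
             "val": "placeholder"}), b"")
        print(ret, outs)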
Dec  1 04:47:28 np0005540825 podman[77088]: 2025-12-01 09:47:28.180992457 +0000 UTC m=+0.076576746 container create 5d15fecb86cafa968635d8a5da0762272a18cfe30862be6b2df3789829b525cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:47:28 np0005540825 systemd[1]: libpod-25304a2619df8e90899bad86f0f342b863313d3ee3b5a04873602ac48837ccc2.scope: Deactivated successfully.
Dec  1 04:47:28 np0005540825 podman[76957]: 2025-12-01 09:47:28.193292864 +0000 UTC m=+0.660597886 container died 25304a2619df8e90899bad86f0f342b863313d3ee3b5a04873602ac48837ccc2 (image=quay.io/ceph/ceph:v19, name=frosty_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 04:47:28 np0005540825 podman[77088]: 2025-12-01 09:47:28.149149446 +0000 UTC m=+0.044733775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:47:28 np0005540825 systemd[1]: Started libpod-conmon-5d15fecb86cafa968635d8a5da0762272a18cfe30862be6b2df3789829b525cc.scope.
Dec  1 04:47:28 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a10bc3bcf4ae6a63e659520d981ee3e7a3c272b1512ec857cdf0f80e74940bda-merged.mount: Deactivated successfully.
Dec  1 04:47:28 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:28 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:28 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3275012895' entity='client.admin' 
Dec  1 04:47:28 np0005540825 podman[76957]: 2025-12-01 09:47:28.265612709 +0000 UTC m=+0.732917761 container remove 25304a2619df8e90899bad86f0f342b863313d3ee3b5a04873602ac48837ccc2 (image=quay.io/ceph/ceph:v19, name=frosty_cray, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  1 04:47:28 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:28 np0005540825 systemd[1]: libpod-conmon-25304a2619df8e90899bad86f0f342b863313d3ee3b5a04873602ac48837ccc2.scope: Deactivated successfully.
Dec  1 04:47:28 np0005540825 podman[77088]: 2025-12-01 09:47:28.297353558 +0000 UTC m=+0.192937797 container init 5d15fecb86cafa968635d8a5da0762272a18cfe30862be6b2df3789829b525cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 04:47:28 np0005540825 podman[77088]: 2025-12-01 09:47:28.304896712 +0000 UTC m=+0.200480971 container start 5d15fecb86cafa968635d8a5da0762272a18cfe30862be6b2df3789829b525cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rosalind, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:28 np0005540825 podman[77088]: 2025-12-01 09:47:28.308744791 +0000 UTC m=+0.204329030 container attach 5d15fecb86cafa968635d8a5da0762272a18cfe30862be6b2df3789829b525cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Dec  1 04:47:28 np0005540825 condescending_rosalind[77121]: 167 167
Dec  1 04:47:28 np0005540825 systemd[1]: libpod-5d15fecb86cafa968635d8a5da0762272a18cfe30862be6b2df3789829b525cc.scope: Deactivated successfully.
Dec  1 04:47:28 np0005540825 podman[77088]: 2025-12-01 09:47:28.312766155 +0000 UTC m=+0.208350434 container died 5d15fecb86cafa968635d8a5da0762272a18cfe30862be6b2df3789829b525cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rosalind, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  1 04:47:28 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0db04e89759ed9cc0bfae17503785e2df7c937a39175479e833c0b5a2c51d564-merged.mount: Deactivated successfully.
Dec  1 04:47:28 np0005540825 podman[77088]: 2025-12-01 09:47:28.361063441 +0000 UTC m=+0.256647670 container remove 5d15fecb86cafa968635d8a5da0762272a18cfe30862be6b2df3789829b525cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rosalind, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  1 04:47:28 np0005540825 systemd[1]: libpod-conmon-5d15fecb86cafa968635d8a5da0762272a18cfe30862be6b2df3789829b525cc.scope: Deactivated successfully.
Dec  1 04:47:28 np0005540825 podman[77145]: 2025-12-01 09:47:28.542218462 +0000 UTC m=+0.054488586 container create 08ccee2f07db91713f2f94c3dd9a042e2bf705224effafd18dc562bb297b323b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:28 np0005540825 systemd[1]: Started libpod-conmon-08ccee2f07db91713f2f94c3dd9a042e2bf705224effafd18dc562bb297b323b.scope.
Dec  1 04:47:28 np0005540825 podman[77145]: 2025-12-01 09:47:28.515875843 +0000 UTC m=+0.028146027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:47:28 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:28 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f095f13e1bb4e35c551d33461f524071f5384d2e0a79f610b24a3633aac0b862/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:28 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f095f13e1bb4e35c551d33461f524071f5384d2e0a79f610b24a3633aac0b862/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:28 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f095f13e1bb4e35c551d33461f524071f5384d2e0a79f610b24a3633aac0b862/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:28 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f095f13e1bb4e35c551d33461f524071f5384d2e0a79f610b24a3633aac0b862/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
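The four xfs warnings flag the inode timestamp limit of an xfs filesystem without the bigtime feature: 0x7fffffff is the 32-bit time_t maximum. They are informational, not errors. A one-line check of what that limit decodes to:

    from datetime import datetime, timezone

    # 0x7FFFFFFF seconds after the Unix epoch is the classic year-2038 boundary.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00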
Dec  1 04:47:28 np0005540825 podman[77145]: 2025-12-01 09:47:28.639523032 +0000 UTC m=+0.151793246 container init 08ccee2f07db91713f2f94c3dd9a042e2bf705224effafd18dc562bb297b323b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:47:28 np0005540825 podman[77145]: 2025-12-01 09:47:28.657225958 +0000 UTC m=+0.169496092 container start 08ccee2f07db91713f2f94c3dd9a042e2bf705224effafd18dc562bb297b323b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  1 04:47:28 np0005540825 podman[77145]: 2025-12-01 09:47:28.665340067 +0000 UTC m=+0.177610191 container attach 08ccee2f07db91713f2f94c3dd9a042e2bf705224effafd18dc562bb297b323b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 04:47:28 np0005540825 python3[77192]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
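The _raw_params above decode to a single containerized ceph CLI call (#012 is the journal's escape for the trailing newline). A sketch reconstructing the same invocation, exactly as logged:

    import subprocess

    # Run the ceph CLI from the v19 image against the host cluster
    # (--net=host) and set the cephadm module option seen in the log.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "365f19c2-81e5-5edd-b6b4-280555214d3a",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "config", "set", "mgr", "mgr/cephadm/use_repo_digest", "false",
    ]
    subprocess.run(cmd, check=True)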
Dec  1 04:47:28 np0005540825 podman[77195]: 2025-12-01 09:47:28.985924965 +0000 UTC m=+0.072650565 container create b540cb05ac9a9b59c16e8872acc3ce852ed3d5fb61d9c1e2d9cba0159bac0f0d (image=quay.io/ceph/ceph:v19, name=compassionate_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 04:47:29 np0005540825 systemd[1]: Started libpod-conmon-b540cb05ac9a9b59c16e8872acc3ce852ed3d5fb61d9c1e2d9cba0159bac0f0d.scope.
Dec  1 04:47:29 np0005540825 podman[77195]: 2025-12-01 09:47:28.955195532 +0000 UTC m=+0.041921182 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:29 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:29 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71e5e6e8c62040ff62420fe72ee6d31d645ca3611881308ad677e4ee3496bc05/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:29 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71e5e6e8c62040ff62420fe72ee6d31d645ca3611881308ad677e4ee3496bc05/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:29 np0005540825 podman[77195]: 2025-12-01 09:47:29.091570459 +0000 UTC m=+0.178296039 container init b540cb05ac9a9b59c16e8872acc3ce852ed3d5fb61d9c1e2d9cba0159bac0f0d (image=quay.io/ceph/ceph:v19, name=compassionate_shannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 04:47:29 np0005540825 podman[77195]: 2025-12-01 09:47:29.100988062 +0000 UTC m=+0.187713662 container start b540cb05ac9a9b59c16e8872acc3ce852ed3d5fb61d9c1e2d9cba0159bac0f0d (image=quay.io/ceph/ceph:v19, name=compassionate_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  1 04:47:29 np0005540825 podman[77195]: 2025-12-01 09:47:29.10517041 +0000 UTC m=+0.191895990 container attach b540cb05ac9a9b59c16e8872acc3ce852ed3d5fb61d9c1e2d9cba0159bac0f0d (image=quay.io/ceph/ceph:v19, name=compassionate_shannon, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 04:47:29 np0005540825 ceph-mgr[74709]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]: [
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:    {
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:        "available": false,
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:        "being_replaced": false,
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:        "ceph_device_lvm": false,
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:        "lsm_data": {},
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:        "lvs": [],
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:        "path": "/dev/sr0",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:        "rejected_reasons": [
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "Insufficient space (<5GB)",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "Has a FileSystem"
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:        ],
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:        "sys_api": {
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "actuators": null,
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "device_nodes": [
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:                "sr0"
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            ],
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "devname": "sr0",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "human_readable_size": "482.00 KB",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "id_bus": "ata",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "model": "QEMU DVD-ROM",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "nr_requests": "2",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "parent": "/dev/sr0",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "partitions": {},
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "path": "/dev/sr0",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "removable": "1",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "rev": "2.5+",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "ro": "0",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "rotational": "1",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "sas_address": "",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "sas_device_handle": "",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "scheduler_mode": "mq-deadline",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "sectors": 0,
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "sectorsize": "2048",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "size": 493568.0,
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "support_discard": "2048",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "type": "disk",
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:            "vendor": "QEMU"
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:        }
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]:    }
Dec  1 04:47:29 np0005540825 epic_dubinsky[77162]: ]
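The JSON block emitted by epic_dubinsky is a ceph-volume style device inventory: the only candidate it reports is the QEMU DVD-ROM /dev/sr0, rejected as an OSD device for being under 5 GB and already carrying a filesystem. A sketch of how such a report can be filtered, assuming the output above has been captured to a file (inventory.json is hypothetical):

    import json

    # hypothetical capture of the inventory printed above
    devices = json.loads(open("inventory.json").read())
    for dev in devices:
        if dev["available"]:
            print("usable:", dev["path"])
        else:
            print("rejected:", dev["path"], "->", "; ".join(dev["rejected_reasons"]))
    # rejected: /dev/sr0 -> Insufficient space (<5GB); Has a FileSystem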
Dec  1 04:47:29 np0005540825 systemd[1]: libpod-08ccee2f07db91713f2f94c3dd9a042e2bf705224effafd18dc562bb297b323b.scope: Deactivated successfully.
Dec  1 04:47:29 np0005540825 conmon[77162]: conmon 08ccee2f07db91713f2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08ccee2f07db91713f2f94c3dd9a042e2bf705224effafd18dc562bb297b323b.scope/container/memory.events
Dec  1 04:47:29 np0005540825 podman[77145]: 2025-12-01 09:47:29.512158316 +0000 UTC m=+1.024428450 container died 08ccee2f07db91713f2f94c3dd9a042e2bf705224effafd18dc562bb297b323b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: Added label _admin to host compute-0
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1525471845' entity='client.admin' 
Dec  1 04:47:29 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f095f13e1bb4e35c551d33461f524071f5384d2e0a79f610b24a3633aac0b862-merged.mount: Deactivated successfully.
Dec  1 04:47:29 np0005540825 systemd[1]: libpod-b540cb05ac9a9b59c16e8872acc3ce852ed3d5fb61d9c1e2d9cba0159bac0f0d.scope: Deactivated successfully.
Dec  1 04:47:29 np0005540825 podman[77145]: 2025-12-01 09:47:29.578289761 +0000 UTC m=+1.090559875 container remove 08ccee2f07db91713f2f94c3dd9a042e2bf705224effafd18dc562bb297b323b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Dec  1 04:47:29 np0005540825 podman[77195]: 2025-12-01 09:47:29.581479103 +0000 UTC m=+0.668204693 container died b540cb05ac9a9b59c16e8872acc3ce852ed3d5fb61d9c1e2d9cba0159bac0f0d (image=quay.io/ceph/ceph:v19, name=compassionate_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:47:29 np0005540825 systemd[1]: libpod-conmon-08ccee2f07db91713f2f94c3dd9a042e2bf705224effafd18dc562bb297b323b.scope: Deactivated successfully.
Dec  1 04:47:29 np0005540825 systemd[1]: var-lib-containers-storage-overlay-71e5e6e8c62040ff62420fe72ee6d31d645ca3611881308ad677e4ee3496bc05-merged.mount: Deactivated successfully.
Dec  1 04:47:29 np0005540825 podman[77195]: 2025-12-01 09:47:29.623129927 +0000 UTC m=+0.709855477 container remove b540cb05ac9a9b59c16e8872acc3ce852ed3d5fb61d9c1e2d9cba0159bac0f0d (image=quay.io/ceph/ceph:v19, name=compassionate_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:47:29 np0005540825 systemd[1]: libpod-conmon-b540cb05ac9a9b59c16e8872acc3ce852ed3d5fb61d9c1e2d9cba0159bac0f0d.scope: Deactivated successfully.
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:47:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:47:29 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:47:29 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:47:30 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:47:30 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:47:30 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1525471845' entity='client.admin' 
Dec  1 04:47:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:47:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:47:30 np0005540825 ceph-mon[74416]: Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:47:30 np0005540825 ceph-mon[74416]: Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:47:30 np0005540825 ansible-async_wrapper.py[78774]: Invoked with j694736297860 30 /home/zuul/.ansible/tmp/ansible-tmp-1764582449.9689026-37141-187048758280904/AnsiballZ_command.py _
Dec  1 04:47:30 np0005540825 ansible-async_wrapper.py[78825]: Starting module and watcher
Dec  1 04:47:30 np0005540825 ansible-async_wrapper.py[78825]: Start watching 78826 (30)
Dec  1 04:47:30 np0005540825 ansible-async_wrapper.py[78826]: Start module (78826)
Dec  1 04:47:30 np0005540825 ansible-async_wrapper.py[78774]: Return async_wrapper task started.
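This is Ansible's fire-and-forget async machinery: async_wrapper forks a watcher (78825) and the module itself (78826), returns immediately, and later async_status polls a JSON result file named after the job id under the async directory (here /root/.ansible_async, per the 04:47:32 entry). A sketch of what that status poll reads, assuming the jid and path from the log:

    import json
    from pathlib import Path

    # Ansible writes the async job result as JSON; fields include
    # "started", "finished" and, once the module is done, "rc"/"stdout".
    status_file = Path("/root/.ansible_async/j694736297860.78774")
    data = json.loads(status_file.read_text())
    print(data.get("finished"), data.get("rc"))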
Dec  1 04:47:30 np0005540825 python3[78828]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:47:30 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:47:30 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:47:30 np0005540825 podman[78873]: 2025-12-01 09:47:30.825267749 +0000 UTC m=+0.071288980 container create a4d22e8b389fcb8400d7383357a3b21199251b6dee7f967bccab1992448ef59b (image=quay.io/ceph/ceph:v19, name=practical_lalande, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:30 np0005540825 systemd[1]: Started libpod-conmon-a4d22e8b389fcb8400d7383357a3b21199251b6dee7f967bccab1992448ef59b.scope.
Dec  1 04:47:30 np0005540825 podman[78873]: 2025-12-01 09:47:30.791851397 +0000 UTC m=+0.037872678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:30 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9633a872c89aa43519559c7579ade50e4a80f9dbd8bb5f5ee6607a1304dde92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9633a872c89aa43519559c7579ade50e4a80f9dbd8bb5f5ee6607a1304dde92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:30 np0005540825 podman[78873]: 2025-12-01 09:47:30.920843263 +0000 UTC m=+0.166864554 container init a4d22e8b389fcb8400d7383357a3b21199251b6dee7f967bccab1992448ef59b (image=quay.io/ceph/ceph:v19, name=practical_lalande, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 04:47:30 np0005540825 podman[78873]: 2025-12-01 09:47:30.928024639 +0000 UTC m=+0.174045840 container start a4d22e8b389fcb8400d7383357a3b21199251b6dee7f967bccab1992448ef59b (image=quay.io/ceph/ceph:v19, name=practical_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 04:47:30 np0005540825 podman[78873]: 2025-12-01 09:47:30.994869092 +0000 UTC m=+0.240890313 container attach a4d22e8b389fcb8400d7383357a3b21199251b6dee7f967bccab1992448ef59b (image=quay.io/ceph/ceph:v19, name=practical_lalande, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:31 np0005540825 ceph-mgr[74709]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec  1 04:47:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:31 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  1 04:47:31 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  1 04:47:31 np0005540825 practical_lalande[78919]: 
Dec  1 04:47:31 np0005540825 practical_lalande[78919]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
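The blank line plus one JSON object is the full stdout of the `orch status --format json` call invoked at 04:47:30. The deployment gates on it: proceed only when the cephadm backend is available and not paused. A minimal parse of the exact line logged above:

    import json

    line = '{"available": true, "backend": "cephadm", "paused": false, "workers": 10}'
    status = json.loads(line)
    ready = status["backend"] == "cephadm" and status["available"] and not status["paused"]
    print("orchestrator ready:", ready)  # True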
Dec  1 04:47:31 np0005540825 systemd[1]: libpod-a4d22e8b389fcb8400d7383357a3b21199251b6dee7f967bccab1992448ef59b.scope: Deactivated successfully.
Dec  1 04:47:31 np0005540825 podman[79094]: 2025-12-01 09:47:31.344508518 +0000 UTC m=+0.033276129 container died a4d22e8b389fcb8400d7383357a3b21199251b6dee7f967bccab1992448ef59b (image=quay.io/ceph/ceph:v19, name=practical_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 04:47:31 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d9633a872c89aa43519559c7579ade50e4a80f9dbd8bb5f5ee6607a1304dde92-merged.mount: Deactivated successfully.
Dec  1 04:47:31 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:47:31 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:47:31 np0005540825 podman[79094]: 2025-12-01 09:47:31.445599255 +0000 UTC m=+0.134366866 container remove a4d22e8b389fcb8400d7383357a3b21199251b6dee7f967bccab1992448ef59b (image=quay.io/ceph/ceph:v19, name=practical_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:31 np0005540825 systemd[1]: libpod-conmon-a4d22e8b389fcb8400d7383357a3b21199251b6dee7f967bccab1992448ef59b.scope: Deactivated successfully.
Dec  1 04:47:31 np0005540825 ansible-async_wrapper.py[78826]: Module complete (78826)
Dec  1 04:47:31 np0005540825 ceph-mon[74416]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:47:31 np0005540825 ceph-mon[74416]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
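TOO_FEW_OSDS fires because no OSDs exist yet while osd_pool_default_size is 1; it should clear once the first OSD comes up. A sketch for inspecting the active health checks, assuming `ceph health detail` with JSON output and a readable admin keyring:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    for name, check in health.get("checks", {}).items():
        print(name, check["severity"])
    # expected at this point in the log: TOO_FEW_OSDS HEALTH_WARN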
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:47:32 np0005540825 python3[79379]: ansible-ansible.legacy.async_status Invoked with jid=j694736297860.78774 mode=status _async_dir=/root/.ansible_async
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:32 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 4c69962b-1894-41ca-85e7-0fc8d6e7edc2 (Updating crash deployment (+1 -> 1))
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:47:32 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec  1 04:47:32 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
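Deploying the crash collector starts with minting a dedicated key: the auth get-or-create above grants client.crash.compute-0 only the mon and mgr crash profiles. The mgr sends these as JSON mon commands, and the same command format works from the rados Python binding; a sketch, assuming python3-rados and a readable admin conf/keyring:

    import json

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({
        "prefix": "auth get-or-create",
        "entity": "client.crash.compute-0",
        "caps": ["mon", "profile crash", "mgr", "profile crash"],
    })
    # mon_command returns (retcode, output bytes, status string)
    ret, out, err = cluster.mon_command(cmd, b"")
    print(ret, out.decode())
    cluster.shutdown()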
Dec  1 04:47:32 np0005540825 python3[79528]: ansible-ansible.legacy.async_status Invoked with jid=j694736297860.78774 mode=cleanup _async_dir=/root/.ansible_async
Dec  1 04:47:32 np0005540825 podman[79570]: 2025-12-01 09:47:32.739518663 +0000 UTC m=+0.084379467 container create c6ea406608fbbc4a58b0d8b095bbe6145705eaac110d383d2936ff735ff79a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:32 np0005540825 podman[79570]: 2025-12-01 09:47:32.683816597 +0000 UTC m=+0.028677371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:47:32 np0005540825 systemd[1]: Started libpod-conmon-c6ea406608fbbc4a58b0d8b095bbe6145705eaac110d383d2936ff735ff79a87.scope.
Dec  1 04:47:32 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:32 np0005540825 podman[79570]: 2025-12-01 09:47:32.838634709 +0000 UTC m=+0.183495573 container init c6ea406608fbbc4a58b0d8b095bbe6145705eaac110d383d2936ff735ff79a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:32 np0005540825 podman[79570]: 2025-12-01 09:47:32.848261908 +0000 UTC m=+0.193122682 container start c6ea406608fbbc4a58b0d8b095bbe6145705eaac110d383d2936ff735ff79a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:32 np0005540825 podman[79570]: 2025-12-01 09:47:32.851548352 +0000 UTC m=+0.196409216 container attach c6ea406608fbbc4a58b0d8b095bbe6145705eaac110d383d2936ff735ff79a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  1 04:47:32 np0005540825 modest_shannon[79613]: 167 167
Dec  1 04:47:32 np0005540825 python3[79609]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 04:47:32 np0005540825 systemd[1]: libpod-c6ea406608fbbc4a58b0d8b095bbe6145705eaac110d383d2936ff735ff79a87.scope: Deactivated successfully.
Dec  1 04:47:32 np0005540825 podman[79570]: 2025-12-01 09:47:32.857385753 +0000 UTC m=+0.202246557 container died c6ea406608fbbc4a58b0d8b095bbe6145705eaac110d383d2936ff735ff79a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 04:47:32 np0005540825 systemd[1]: var-lib-containers-storage-overlay-61f23c8b353acb60f517bee8cc282823d1bc052b256b5e61e67089535f903363-merged.mount: Deactivated successfully.
Dec  1 04:47:32 np0005540825 podman[79570]: 2025-12-01 09:47:32.906885179 +0000 UTC m=+0.251745953 container remove c6ea406608fbbc4a58b0d8b095bbe6145705eaac110d383d2936ff735ff79a87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 04:47:32 np0005540825 systemd[1]: libpod-conmon-c6ea406608fbbc4a58b0d8b095bbe6145705eaac110d383d2936ff735ff79a87.scope: Deactivated successfully.
Dec  1 04:47:32 np0005540825 systemd[1]: Reloading.
Dec  1 04:47:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:47:33 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:47:33 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: Deploying daemon crash.compute-0 on compute-0
Dec  1 04:47:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:33 np0005540825 systemd[1]: Reloading.
Dec  1 04:47:33 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:47:33 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:47:33 np0005540825 python3[79697]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:47:33 np0005540825 podman[79734]: 2025-12-01 09:47:33.50640936 +0000 UTC m=+0.052329840 container create c5ad27ffc1d8242e674a89ab37c41e86d3b09a7e1460962e67787e9287965735 (image=quay.io/ceph/ceph:v19, name=blissful_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 04:47:33 np0005540825 systemd[1]: Starting Ceph crash.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:47:33 np0005540825 systemd[1]: Started libpod-conmon-c5ad27ffc1d8242e674a89ab37c41e86d3b09a7e1460962e67787e9287965735.scope.
Dec  1 04:47:33 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:33 np0005540825 podman[79734]: 2025-12-01 09:47:33.481386075 +0000 UTC m=+0.027306565 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baf72709a4285123e6f1c69f0026fc32d2b491c93c99f58e82cd9b644ad7471/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baf72709a4285123e6f1c69f0026fc32d2b491c93c99f58e82cd9b644ad7471/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baf72709a4285123e6f1c69f0026fc32d2b491c93c99f58e82cd9b644ad7471/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:33 np0005540825 podman[79734]: 2025-12-01 09:47:33.593549757 +0000 UTC m=+0.139470217 container init c5ad27ffc1d8242e674a89ab37c41e86d3b09a7e1460962e67787e9287965735 (image=quay.io/ceph/ceph:v19, name=blissful_liskov, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 04:47:33 np0005540825 podman[79734]: 2025-12-01 09:47:33.602005275 +0000 UTC m=+0.147925715 container start c5ad27ffc1d8242e674a89ab37c41e86d3b09a7e1460962e67787e9287965735 (image=quay.io/ceph/ceph:v19, name=blissful_liskov, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 04:47:33 np0005540825 podman[79734]: 2025-12-01 09:47:33.605993928 +0000 UTC m=+0.151914378 container attach c5ad27ffc1d8242e674a89ab37c41e86d3b09a7e1460962e67787e9287965735 (image=quay.io/ceph/ceph:v19, name=blissful_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 04:47:33 np0005540825 podman[79821]: 2025-12-01 09:47:33.813022567 +0000 UTC m=+0.052475384 container create 845bc98e981e087f54047bba58fe7ec7e04a3f819541620fa5e6fb2007ee63e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b9f94645d183e507eaaffb51575cc6291ab50b1a083f396e8a6d5584b6605e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b9f94645d183e507eaaffb51575cc6291ab50b1a083f396e8a6d5584b6605e/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b9f94645d183e507eaaffb51575cc6291ab50b1a083f396e8a6d5584b6605e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b9f94645d183e507eaaffb51575cc6291ab50b1a083f396e8a6d5584b6605e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:33 np0005540825 podman[79821]: 2025-12-01 09:47:33.789814259 +0000 UTC m=+0.029267056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:47:33 np0005540825 podman[79821]: 2025-12-01 09:47:33.912597265 +0000 UTC m=+0.152050132 container init 845bc98e981e087f54047bba58fe7ec7e04a3f819541620fa5e6fb2007ee63e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 04:47:33 np0005540825 podman[79821]: 2025-12-01 09:47:33.921861244 +0000 UTC m=+0.161314051 container start 845bc98e981e087f54047bba58fe7ec7e04a3f819541620fa5e6fb2007ee63e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:33 np0005540825 bash[79821]: 845bc98e981e087f54047bba58fe7ec7e04a3f819541620fa5e6fb2007ee63e8
Dec  1 04:47:33 np0005540825 systemd[1]: Started Ceph crash.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
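The crash daemon now runs as a cephadm-managed systemd service; the two Reloading passes at 04:47:32-33 were systemd picking up its freshly written unit (the rc.local and SysV network generator messages are routine noise emitted during any reload). A quick liveness check, assuming cephadm's usual ceph-<fsid>@<daemon> unit naming:

    import subprocess

    fsid = "365f19c2-81e5-5edd-b6b4-280555214d3a"
    unit = f"ceph-{fsid}@crash.compute-0.service"
    state = subprocess.run(
        ["systemctl", "is-active", unit],
        capture_output=True, text=True,
    ).stdout.strip()
    print(unit, state)  # expect "active" after the Started message above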
Dec  1 04:47:33 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  1 04:47:33 np0005540825 blissful_liskov[79753]: 
Dec  1 04:47:33 np0005540825 blissful_liskov[79753]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  1 04:47:33 np0005540825 systemd[1]: libpod-c5ad27ffc1d8242e674a89ab37c41e86d3b09a7e1460962e67787e9287965735.scope: Deactivated successfully.
Dec  1 04:47:33 np0005540825 podman[79734]: 2025-12-01 09:47:33.997261268 +0000 UTC m=+0.543181738 container died c5ad27ffc1d8242e674a89ab37c41e86d3b09a7e1460962e67787e9287965735 (image=quay.io/ceph/ceph:v19, name=blissful_liskov, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:34 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 4c69962b-1894-41ca-85e7-0fc8d6e7edc2 (Updating crash deployment (+1 -> 1))
Dec  1 04:47:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  1 04:47:34 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 4c69962b-1894-41ca-85e7-0fc8d6e7edc2 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:34 np0005540825 systemd[1]: var-lib-containers-storage-overlay-8baf72709a4285123e6f1c69f0026fc32d2b491c93c99f58e82cd9b644ad7471-merged.mount: Deactivated successfully.
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:34 np0005540825 podman[79734]: 2025-12-01 09:47:34.043404949 +0000 UTC m=+0.589325379 container remove c5ad27ffc1d8242e674a89ab37c41e86d3b09a7e1460962e67787e9287965735 (image=quay.io/ceph/ceph:v19, name=blissful_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:34 np0005540825 systemd[1]: libpod-conmon-c5ad27ffc1d8242e674a89ab37c41e86d3b09a7e1460962e67787e9287965735.scope: Deactivated successfully.
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:34 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
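
The six bare from=/entity= lines above are the monitor echoing each audit entry into the cluster log. Entries that write configuration (config set, config-key set) end at entity= with the command body withheld, plausibly so stored values (some are secrets, such as the alertmanager and prometheus web passwords keyed at 04:47:35) never reach the log, while read-only commands like auth get keep their full cmd=. A small sketch for tallying such echoes when reading a capture like this one; the sample payloads are copied from this log:

import re
from collections import Counter

# Sample payloads, verbatim from the ceph-mon lines in this log.
lines = [
    "from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' ",
    "from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' ",
    'from=\'mgr.14122 192.168.122.100:0/2266810210\' entity=\'mgr.compute-0.fospow\' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch',
]

pat = re.compile(r"from='(?P<who>\S+) [^']*' entity='(?P<entity>[^']*)'")
counts = Counter(pat.match(line)["entity"] for line in lines)
print(counts)  # Counter({'mgr.compute-0.fospow': 3})
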
Dec  1 04:47:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: 2025-12-01T09:47:34.155+0000 7fc4b3076640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  1 04:47:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: 2025-12-01T09:47:34.155+0000 7fc4b3076640 -1 AuthRegistry(0x7fc4ac0698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  1 04:47:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: 2025-12-01T09:47:34.157+0000 7fc4b3076640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  1 04:47:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: 2025-12-01T09:47:34.157+0000 7fc4b3076640 -1 AuthRegistry(0x7fc4b3074ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  1 04:47:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: 2025-12-01T09:47:34.158+0000 7fc4b0deb640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  1 04:47:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: 2025-12-01T09:47:34.158+0000 7fc4b3076640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec  1 04:47:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  1 04:47:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
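
The error lines above are one failed "ping" from the crash agent: with no keyring at any of the /etc/ceph/* fallback paths it disabled cephx, the mon then rejected auth method 1 (none) because only method 2 (cephx) is allowed, and the attempt ended in [errno 13]. The agent is not stuck; per the last line it re-scans /var/lib/ceph/crash every 600 s and will ping again once its client.crash key is usable. A quick host-side check, assuming the usual cephadm layout (/var/lib/ceph/<fsid>/<daemon-type>.<daemon-id>/keyring; that path convention is an assumption, not shown in this log):

import os

# Assumed (standard cephadm layout; not shown in this log): each daemon keeps
# its keyring under /var/lib/ceph/<fsid>/<daemon-type>.<daemon-id>/keyring.
FSID = "365f19c2-81e5-5edd-b6b4-280555214d3a"  # from the container/unit names above
DAEMON = "crash.compute-0"

keyring = f"/var/lib/ceph/{FSID}/{DAEMON}/keyring"
if os.path.isfile(keyring):
    print(f"crash keyring in place: {keyring}")
else:
    # Matches the failure above: inside the container only the /etc/ceph/*
    # admin paths are searched, and none of them is mounted for crash.
    print(f"{keyring} missing; expect 'no keyring found ... disabling cephx'")
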
Dec  1 04:47:34 np0005540825 python3[79964]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
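
This task and the three that follow (04:47:35, 04:47:36, 04:47:38) all use the same pattern: run the ceph CLI once from the quay.io/ceph/ceph:v19 image with /etc/ceph bind-mounted, against fsid 365f19c2-81e5-5edd-b6b4-280555214d3a, then discard the container (--rm, hence the throwaway names like frosty_jang below). A sketch of that pattern as a helper; the wrapper itself is illustrative, not part of the playbook, and the assimilate_ceph.conf/ceph_spec.yaml mounts are omitted since only the final orch apply task reads them:

import subprocess

FSID = "365f19c2-81e5-5edd-b6b4-280555214d3a"
IMAGE = "quay.io/ceph/ceph:v19"

def ceph(*args: str) -> str:
    """Run one ceph subcommand in a throwaway container (hypothetical helper
    mirroring the podman flags of the ansible task above)."""
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", IMAGE,
        "--fsid", FSID,
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        *args,
    ]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# The task logged above, expressed through the wrapper:
#   ceph("config", "set", "global", "log_to_file", "true")
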
Dec  1 04:47:34 np0005540825 podman[79985]: 2025-12-01 09:47:34.584621506 +0000 UTC m=+0.054661901 container create f8f427a0a50d6e51b25ccc5a65d9b8215b75a39f69552f9480465e9a19e634a1 (image=quay.io/ceph/ceph:v19, name=frosty_jang, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  1 04:47:34 np0005540825 systemd[1]: Started libpod-conmon-f8f427a0a50d6e51b25ccc5a65d9b8215b75a39f69552f9480465e9a19e634a1.scope.
Dec  1 04:47:34 np0005540825 podman[79985]: 2025-12-01 09:47:34.55338583 +0000 UTC m=+0.023426215 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:34 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c9e80c300b9b4c972d6b9d2f26ee024f59138784dccf11445cc5f1400cb86b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c9e80c300b9b4c972d6b9d2f26ee024f59138784dccf11445cc5f1400cb86b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c9e80c300b9b4c972d6b9d2f26ee024f59138784dccf11445cc5f1400cb86b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:34 np0005540825 podman[79985]: 2025-12-01 09:47:34.67435558 +0000 UTC m=+0.144395985 container init f8f427a0a50d6e51b25ccc5a65d9b8215b75a39f69552f9480465e9a19e634a1 (image=quay.io/ceph/ceph:v19, name=frosty_jang, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:34 np0005540825 podman[79985]: 2025-12-01 09:47:34.684314156 +0000 UTC m=+0.154354511 container start f8f427a0a50d6e51b25ccc5a65d9b8215b75a39f69552f9480465e9a19e634a1 (image=quay.io/ceph/ceph:v19, name=frosty_jang, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:34 np0005540825 podman[79985]: 2025-12-01 09:47:34.68830616 +0000 UTC m=+0.158346565 container attach f8f427a0a50d6e51b25ccc5a65d9b8215b75a39f69552f9480465e9a19e634a1 (image=quay.io/ceph/ceph:v19, name=frosty_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 04:47:34 np0005540825 podman[80076]: 2025-12-01 09:47:34.909428741 +0000 UTC m=+0.078350182 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  1 04:47:35 np0005540825 podman[80076]: 2025-12-01 09:47:35.026666884 +0000 UTC m=+0.195588225 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3622753350' entity='client.admin' 
Dec  1 04:47:35 np0005540825 systemd[1]: libpod-f8f427a0a50d6e51b25ccc5a65d9b8215b75a39f69552f9480465e9a19e634a1.scope: Deactivated successfully.
Dec  1 04:47:35 np0005540825 podman[79985]: 2025-12-01 09:47:35.06721475 +0000 UTC m=+0.537255125 container died f8f427a0a50d6e51b25ccc5a65d9b8215b75a39f69552f9480465e9a19e634a1 (image=quay.io/ceph/ceph:v19, name=frosty_jang, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:35 np0005540825 systemd[1]: var-lib-containers-storage-overlay-53c9e80c300b9b4c972d6b9d2f26ee024f59138784dccf11445cc5f1400cb86b-merged.mount: Deactivated successfully.
Dec  1 04:47:35 np0005540825 podman[79985]: 2025-12-01 09:47:35.107305264 +0000 UTC m=+0.577345629 container remove f8f427a0a50d6e51b25ccc5a65d9b8215b75a39f69552f9480465e9a19e634a1 (image=quay.io/ceph/ceph:v19, name=frosty_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3622753350' entity='client.admin' 
Dec  1 04:47:35 np0005540825 systemd[1]: libpod-conmon-f8f427a0a50d6e51b25ccc5a65d9b8215b75a39f69552f9480465e9a19e634a1.scope: Deactivated successfully.
Dec  1 04:47:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:35 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec  1 04:47:35 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:47:35 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  1 04:47:35 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
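
The config generate-minimal-conf dispatches above are the mgr rendering the stripped-down ceph.conf it ships into the daemon's directory as part of "Reconfiguring daemon mon.compute-0". That file is essentially just the fsid plus the mon address list; an illustration with assumed contents (the log never prints the file; the fsid is this cluster's, the mon address is inferred from 192.168.122.100 being the only address seen here, and whitespace is simplified):

import configparser

# Assumed shape of `ceph config generate-minimal-conf` output; not copied
# from the log.
MINIMAL_CONF = """\
[global]
fsid = 365f19c2-81e5-5edd-b6b4-280555214d3a
mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]
"""

cfg = configparser.ConfigParser()
cfg.read_string(MINIMAL_CONF)
print(cfg["global"]["fsid"])
print(cfg["global"]["mon_host"])
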
Dec  1 04:47:35 np0005540825 python3[80187]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:47:35 np0005540825 podman[80243]: 2025-12-01 09:47:35.531426101 +0000 UTC m=+0.047581188 container create 9c50c5a8754a2211fb28a27fcb6032b128f6138c6f5ad1177ecb23f8b9946e3c (image=quay.io/ceph/ceph:v19, name=quizzical_curie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  1 04:47:35 np0005540825 systemd[1]: Started libpod-conmon-9c50c5a8754a2211fb28a27fcb6032b128f6138c6f5ad1177ecb23f8b9946e3c.scope.
Dec  1 04:47:35 np0005540825 podman[80243]: 2025-12-01 09:47:35.507127045 +0000 UTC m=+0.023282142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:35 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:35 np0005540825 ansible-async_wrapper.py[78825]: Done in kid B.
Dec  1 04:47:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3ca636c588941b8323c6b7f081c7618d2a84e24736f03ad8c070892469c050/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3ca636c588941b8323c6b7f081c7618d2a84e24736f03ad8c070892469c050/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3ca636c588941b8323c6b7f081c7618d2a84e24736f03ad8c070892469c050/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:35 np0005540825 podman[80243]: 2025-12-01 09:47:35.622530031 +0000 UTC m=+0.138685118 container init 9c50c5a8754a2211fb28a27fcb6032b128f6138c6f5ad1177ecb23f8b9946e3c (image=quay.io/ceph/ceph:v19, name=quizzical_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:35 np0005540825 podman[80243]: 2025-12-01 09:47:35.62755877 +0000 UTC m=+0.143713857 container start 9c50c5a8754a2211fb28a27fcb6032b128f6138c6f5ad1177ecb23f8b9946e3c (image=quay.io/ceph/ceph:v19, name=quizzical_curie, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:35 np0005540825 podman[80243]: 2025-12-01 09:47:35.630537997 +0000 UTC m=+0.146693084 container attach 9c50c5a8754a2211fb28a27fcb6032b128f6138c6f5ad1177ecb23f8b9946e3c (image=quay.io/ceph/ceph:v19, name=quizzical_curie, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 04:47:35 np0005540825 podman[80309]: 2025-12-01 09:47:35.83656012 +0000 UTC m=+0.044762295 container create f85cff2f964915adbf8910de08814b02212d30db87e56c92e303ce6fbff2f23d (image=quay.io/ceph/ceph:v19, name=youthful_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 04:47:35 np0005540825 podman[80309]: 2025-12-01 09:47:35.816063672 +0000 UTC m=+0.024265877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:35 np0005540825 systemd[1]: Started libpod-conmon-f85cff2f964915adbf8910de08814b02212d30db87e56c92e303ce6fbff2f23d.scope.
Dec  1 04:47:35 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec  1 04:47:36 np0005540825 podman[80309]: 2025-12-01 09:47:36.028736646 +0000 UTC m=+0.236938861 container init f85cff2f964915adbf8910de08814b02212d30db87e56c92e303ce6fbff2f23d (image=quay.io/ceph/ceph:v19, name=youthful_khayyam, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  1 04:47:36 np0005540825 podman[80309]: 2025-12-01 09:47:36.038811326 +0000 UTC m=+0.247013531 container start f85cff2f964915adbf8910de08814b02212d30db87e56c92e303ce6fbff2f23d (image=quay.io/ceph/ceph:v19, name=youthful_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:36 np0005540825 youthful_khayyam[80326]: 167 167
Dec  1 04:47:36 np0005540825 systemd[1]: libpod-f85cff2f964915adbf8910de08814b02212d30db87e56c92e303ce6fbff2f23d.scope: Deactivated successfully.
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803591292' entity='client.admin' 
Dec  1 04:47:36 np0005540825 systemd[1]: libpod-9c50c5a8754a2211fb28a27fcb6032b128f6138c6f5ad1177ecb23f8b9946e3c.scope: Deactivated successfully.
Dec  1 04:47:36 np0005540825 podman[80309]: 2025-12-01 09:47:36.113978794 +0000 UTC m=+0.322180989 container attach f85cff2f964915adbf8910de08814b02212d30db87e56c92e303ce6fbff2f23d (image=quay.io/ceph/ceph:v19, name=youthful_khayyam, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:36 np0005540825 podman[80309]: 2025-12-01 09:47:36.114396575 +0000 UTC m=+0.322598740 container died f85cff2f964915adbf8910de08814b02212d30db87e56c92e303ce6fbff2f23d (image=quay.io/ceph/ceph:v19, name=youthful_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 04:47:36 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 1 completed events
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:36 np0005540825 podman[80243]: 2025-12-01 09:47:36.273110508 +0000 UTC m=+0.789265595 container died 9c50c5a8754a2211fb28a27fcb6032b128f6138c6f5ad1177ecb23f8b9946e3c (image=quay.io/ceph/ceph:v19, name=quizzical_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/2803591292' entity='client.admin' 
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:36 np0005540825 systemd[1]: var-lib-containers-storage-overlay-563e7a874d0ed1fd7806c474bb1118f8d4406936051c767ab33dc0d89c2dbadc-merged.mount: Deactivated successfully.
Dec  1 04:47:36 np0005540825 podman[80309]: 2025-12-01 09:47:36.303866041 +0000 UTC m=+0.512068216 container remove f85cff2f964915adbf8910de08814b02212d30db87e56c92e303ce6fbff2f23d (image=quay.io/ceph/ceph:v19, name=youthful_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 04:47:36 np0005540825 systemd[1]: var-lib-containers-storage-overlay-8f3ca636c588941b8323c6b7f081c7618d2a84e24736f03ad8c070892469c050-merged.mount: Deactivated successfully.
Dec  1 04:47:36 np0005540825 podman[80243]: 2025-12-01 09:47:36.329846481 +0000 UTC m=+0.846001568 container remove 9c50c5a8754a2211fb28a27fcb6032b128f6138c6f5ad1177ecb23f8b9946e3c (image=quay.io/ceph/ceph:v19, name=quizzical_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:36 np0005540825 systemd[1]: libpod-conmon-9c50c5a8754a2211fb28a27fcb6032b128f6138c6f5ad1177ecb23f8b9946e3c.scope: Deactivated successfully.
Dec  1 04:47:36 np0005540825 systemd[1]: libpod-conmon-f85cff2f964915adbf8910de08814b02212d30db87e56c92e303ce6fbff2f23d.scope: Deactivated successfully.
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:36 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.fospow (unknown last config time)...
Dec  1 04:47:36 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.fospow (unknown last config time)...
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fospow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fospow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:47:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:47:36 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.fospow on compute-0
Dec  1 04:47:36 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.fospow on compute-0
Dec  1 04:47:36 np0005540825 python3[80434]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:47:36 np0005540825 podman[80435]: 2025-12-01 09:47:36.727506046 +0000 UTC m=+0.050625176 container create d0bf3e20326faf78e7aed09cfea367091c8f345557f8ca6d9e6e70a62ecd6c2d (image=quay.io/ceph/ceph:v19, name=jolly_mclean, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  1 04:47:36 np0005540825 systemd[1]: Started libpod-conmon-d0bf3e20326faf78e7aed09cfea367091c8f345557f8ca6d9e6e70a62ecd6c2d.scope.
Dec  1 04:47:36 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8942c597ca65f22ea3017096af9a9f87d7b79b31d6691d47922783629c384eb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8942c597ca65f22ea3017096af9a9f87d7b79b31d6691d47922783629c384eb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8942c597ca65f22ea3017096af9a9f87d7b79b31d6691d47922783629c384eb2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:36 np0005540825 podman[80435]: 2025-12-01 09:47:36.706875264 +0000 UTC m=+0.029994484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:36 np0005540825 podman[80435]: 2025-12-01 09:47:36.800527849 +0000 UTC m=+0.123646969 container init d0bf3e20326faf78e7aed09cfea367091c8f345557f8ca6d9e6e70a62ecd6c2d (image=quay.io/ceph/ceph:v19, name=jolly_mclean, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:47:36 np0005540825 podman[80435]: 2025-12-01 09:47:36.807739225 +0000 UTC m=+0.130858345 container start d0bf3e20326faf78e7aed09cfea367091c8f345557f8ca6d9e6e70a62ecd6c2d (image=quay.io/ceph/ceph:v19, name=jolly_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  1 04:47:36 np0005540825 podman[80435]: 2025-12-01 09:47:36.811229055 +0000 UTC m=+0.134348185 container attach d0bf3e20326faf78e7aed09cfea367091c8f345557f8ca6d9e6e70a62ecd6c2d (image=quay.io/ceph/ceph:v19, name=jolly_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 04:47:36 np0005540825 podman[80469]: 2025-12-01 09:47:36.846408092 +0000 UTC m=+0.044405116 container create 5fbbaff99bc115ca2e2ec8d89399bbc24d4b62bd9723774e9ece46654a0ef775 (image=quay.io/ceph/ceph:v19, name=mystifying_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:47:36 np0005540825 systemd[1]: Started libpod-conmon-5fbbaff99bc115ca2e2ec8d89399bbc24d4b62bd9723774e9ece46654a0ef775.scope.
Dec  1 04:47:36 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:36 np0005540825 podman[80469]: 2025-12-01 09:47:36.920736449 +0000 UTC m=+0.118733523 container init 5fbbaff99bc115ca2e2ec8d89399bbc24d4b62bd9723774e9ece46654a0ef775 (image=quay.io/ceph/ceph:v19, name=mystifying_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  1 04:47:36 np0005540825 podman[80469]: 2025-12-01 09:47:36.827473704 +0000 UTC m=+0.025470718 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:36 np0005540825 podman[80469]: 2025-12-01 09:47:36.934013832 +0000 UTC m=+0.132010866 container start 5fbbaff99bc115ca2e2ec8d89399bbc24d4b62bd9723774e9ece46654a0ef775 (image=quay.io/ceph/ceph:v19, name=mystifying_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:36 np0005540825 podman[80469]: 2025-12-01 09:47:36.938139478 +0000 UTC m=+0.136136492 container attach 5fbbaff99bc115ca2e2ec8d89399bbc24d4b62bd9723774e9ece46654a0ef775 (image=quay.io/ceph/ceph:v19, name=mystifying_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:36 np0005540825 mystifying_edison[80486]: 167 167
Dec  1 04:47:36 np0005540825 systemd[1]: libpod-5fbbaff99bc115ca2e2ec8d89399bbc24d4b62bd9723774e9ece46654a0ef775.scope: Deactivated successfully.
Dec  1 04:47:36 np0005540825 podman[80469]: 2025-12-01 09:47:36.940777396 +0000 UTC m=+0.138774420 container died 5fbbaff99bc115ca2e2ec8d89399bbc24d4b62bd9723774e9ece46654a0ef775 (image=quay.io/ceph/ceph:v19, name=mystifying_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 04:47:36 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e53ceacd85cba46bfa9906ac5dfb09820c089b87446dc99633819eda6790fc49-merged.mount: Deactivated successfully.
Dec  1 04:47:36 np0005540825 podman[80469]: 2025-12-01 09:47:36.973447909 +0000 UTC m=+0.171444913 container remove 5fbbaff99bc115ca2e2ec8d89399bbc24d4b62bd9723774e9ece46654a0ef775 (image=quay.io/ceph/ceph:v19, name=mystifying_edison, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:47:37 np0005540825 systemd[1]: libpod-conmon-5fbbaff99bc115ca2e2ec8d89399bbc24d4b62bd9723774e9ece46654a0ef775.scope: Deactivated successfully.
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4032594481' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  1 04:47:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fospow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4032594481' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  1 04:47:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:47:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec  1 04:47:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:47:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4032594481' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  1 04:47:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec  1 04:47:38 np0005540825 jolly_mclean[80464]: set require_min_compat_client to mimic
Dec  1 04:47:38 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec  1 04:47:38 np0005540825 systemd[1]: libpod-d0bf3e20326faf78e7aed09cfea367091c8f345557f8ca6d9e6e70a62ecd6c2d.scope: Deactivated successfully.
Dec  1 04:47:38 np0005540825 podman[80435]: 2025-12-01 09:47:38.096840169 +0000 UTC m=+1.419959299 container died d0bf3e20326faf78e7aed09cfea367091c8f345557f8ca6d9e6e70a62ecd6c2d (image=quay.io/ceph/ceph:v19, name=jolly_mclean, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:38 np0005540825 systemd[1]: var-lib-containers-storage-overlay-8942c597ca65f22ea3017096af9a9f87d7b79b31d6691d47922783629c384eb2-merged.mount: Deactivated successfully.
Dec  1 04:47:38 np0005540825 podman[80435]: 2025-12-01 09:47:38.143943664 +0000 UTC m=+1.467062794 container remove d0bf3e20326faf78e7aed09cfea367091c8f345557f8ca6d9e6e70a62ecd6c2d (image=quay.io/ceph/ceph:v19, name=jolly_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:38 np0005540825 systemd[1]: libpod-conmon-d0bf3e20326faf78e7aed09cfea367091c8f345557f8ca6d9e6e70a62ecd6c2d.scope: Deactivated successfully.
Dec  1 04:47:38 np0005540825 ceph-mon[74416]: Reconfiguring mgr.compute-0.fospow (unknown last config time)...
Dec  1 04:47:38 np0005540825 ceph-mon[74416]: Reconfiguring daemon mgr.compute-0.fospow on compute-0
Dec  1 04:47:38 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4032594481' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
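
That closes the set-require-min-compat-client round trip begun at 04:47:36: dispatch, a new osdmap epoch committed (e2 -> e3, still 0 OSDs), the container's stdout confirmation via jolly_mclean, and the audit "finished" record above. The flag can be read back from the osdmap; a self-contained check in the same containerized-CLI style (require_min_compat_client is a standard field of `ceph osd dump --format json`):

import json
import subprocess

out = subprocess.run(
    ["podman", "run", "--rm", "--net=host",
     "--volume", "/etc/ceph:/etc/ceph:z",
     "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
     "-c", "/etc/ceph/ceph.conf",
     "-k", "/etc/ceph/ceph.client.admin.keyring",
     "osd", "dump", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
print(json.loads(out)["require_min_compat_client"])  # expected: "mimic"
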
Dec  1 04:47:38 np0005540825 python3[80585]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
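
The closing task hands day-2 management to the orchestrator: `ceph orch apply --in-file /home/ceph_spec.yaml`, the spec being the file bind-mounted into the container above. Its contents never appear in this log; the sketch below only illustrates the multi-document shape such a file takes, with service types guessed from the mgr/cephadm/spec.crash, spec.mon, and spec.mgr config-keys the mgr stored at 04:47:33-35 (hypothetical contents, PyYAML assumed):

import yaml  # PyYAML

# Hypothetical spec; the real /home/ceph-admin/specs/ceph_spec.yaml is not
# shown in the log. Service types guessed from the mgr/cephadm/spec.* keys.
SPEC = """\
service_type: mon
placement:
  hosts:
    - compute-0
---
service_type: mgr
placement:
  hosts:
    - compute-0
---
service_type: crash
placement:
  host_pattern: '*'
"""

for doc in yaml.safe_load_all(SPEC):
    # `orch apply --in-file` accepts exactly this kind of multi-document stream.
    assert "service_type" in doc, doc
    print(doc["service_type"], doc.get("placement"))
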
Dec  1 04:47:38 np0005540825 podman[80586]: 2025-12-01 09:47:38.835486407 +0000 UTC m=+0.045116715 container create 1d955e7654fa351ab1a9f32c32fb8f137646c0dd01f9fc68840e910020aff74f (image=quay.io/ceph/ceph:v19, name=xenodochial_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 04:47:38 np0005540825 systemd[1]: Started libpod-conmon-1d955e7654fa351ab1a9f32c32fb8f137646c0dd01f9fc68840e910020aff74f.scope.
Dec  1 04:47:38 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:38 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5f777475c4788e5680fcbe70f9d34eaec9b28f5a85da4527c0d6b1eda72b30/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:38 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5f777475c4788e5680fcbe70f9d34eaec9b28f5a85da4527c0d6b1eda72b30/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:38 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5f777475c4788e5680fcbe70f9d34eaec9b28f5a85da4527c0d6b1eda72b30/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
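These three kernel lines are informational, not errors: the xfs backing the overlay mounts was formatted without the bigtime feature, so its inode timestamps are representable only until 2038. A quick check, assuming /var/lib/containers is its own xfs mount on this host:

    xfs_info /var/lib/containers | grep bigtime
    # bigtime=0 -> timestamps limited to 2038; bigtime=1 -> extended range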
Dec  1 04:47:38 np0005540825 podman[80586]: 2025-12-01 09:47:38.816446096 +0000 UTC m=+0.026076424 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:38 np0005540825 podman[80586]: 2025-12-01 09:47:38.915580302 +0000 UTC m=+0.125210640 container init 1d955e7654fa351ab1a9f32c32fb8f137646c0dd01f9fc68840e910020aff74f (image=quay.io/ceph/ceph:v19, name=xenodochial_volhard, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 04:47:38 np0005540825 podman[80586]: 2025-12-01 09:47:38.927715055 +0000 UTC m=+0.137345363 container start 1d955e7654fa351ab1a9f32c32fb8f137646c0dd01f9fc68840e910020aff74f (image=quay.io/ceph/ceph:v19, name=xenodochial_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:47:38 np0005540825 podman[80586]: 2025-12-01 09:47:38.931029321 +0000 UTC m=+0.140659649 container attach 1d955e7654fa351ab1a9f32c32fb8f137646c0dd01f9fc68840e910020aff74f (image=quay.io/ceph/ceph:v19, name=xenodochial_volhard, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:39 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:39 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Added host compute-0
Dec  1 04:47:39 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:47:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:40 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:40 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:40 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:40 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:40 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:47:40 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:41 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Dec  1 04:47:41 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Dec  1 04:47:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:47:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:47:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:47:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:47:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:47:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:47:41 np0005540825 ceph-mon[74416]: Added host compute-0
Dec  1 04:47:42 np0005540825 ceph-mon[74416]: Deploying cephadm binary to compute-1
Dec  1 04:47:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
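_set_new_cache_sizes is the mon's memory autotuner redividing its budget (about 0.95 GiB here) between the osdmap increment/full caches and the RocksDB kv cache. The overall target is tunable; the value below is illustrative only:

    ceph config set mon mon_memory_target 2147483648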
Dec  1 04:47:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  1 04:47:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:45 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Added host compute-1
Dec  1 04:47:45 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Added host compute-1
Dec  1 04:47:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:47:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:47:46 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:46 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:46 np0005540825 ceph-mon[74416]: Added host compute-1
Dec  1 04:47:46 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:46 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:46 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Dec  1 04:47:46 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Dec  1 04:47:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:47:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:47:48 np0005540825 ceph-mon[74416]: Deploying cephadm binary to compute-2
Dec  1 04:47:48 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  1 04:47:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Added host compute-2
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Added host compute-2
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  1 04:47:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  1 04:47:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  1 04:47:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  1 04:47:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  1 04:47:50 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  1 04:47:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec  1 04:47:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:50 np0005540825 xenodochial_volhard[80601]: Added host 'compute-0' with addr '192.168.122.100'
Dec  1 04:47:50 np0005540825 xenodochial_volhard[80601]: Added host 'compute-1' with addr '192.168.122.101'
Dec  1 04:47:50 np0005540825 xenodochial_volhard[80601]: Added host 'compute-2' with addr '192.168.122.102'
Dec  1 04:47:50 np0005540825 xenodochial_volhard[80601]: Scheduled mon update...
Dec  1 04:47:50 np0005540825 xenodochial_volhard[80601]: Scheduled mgr update...
Dec  1 04:47:50 np0005540825 xenodochial_volhard[80601]: Scheduled osd.default_drive_group update...
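At this point all three specs have been accepted and queued; a sketch of watching the orchestrator converge, using the same admin-container pattern as the playbook or a plain ceph CLI:

    ceph orch ls    # per-service placement and running/expected counts
    ceph orch ps    # per-daemon placement and state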
Dec  1 04:47:50 np0005540825 systemd[1]: libpod-1d955e7654fa351ab1a9f32c32fb8f137646c0dd01f9fc68840e910020aff74f.scope: Deactivated successfully.
Dec  1 04:47:50 np0005540825 podman[80586]: 2025-12-01 09:47:50.893795019 +0000 UTC m=+12.103425357 container died 1d955e7654fa351ab1a9f32c32fb8f137646c0dd01f9fc68840e910020aff74f (image=quay.io/ceph/ceph:v19, name=xenodochial_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:50 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5c5f777475c4788e5680fcbe70f9d34eaec9b28f5a85da4527c0d6b1eda72b30-merged.mount: Deactivated successfully.
Dec  1 04:47:50 np0005540825 podman[80586]: 2025-12-01 09:47:50.937412244 +0000 UTC m=+12.147042592 container remove 1d955e7654fa351ab1a9f32c32fb8f137646c0dd01f9fc68840e910020aff74f (image=quay.io/ceph/ceph:v19, name=xenodochial_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 04:47:50 np0005540825 systemd[1]: libpod-conmon-1d955e7654fa351ab1a9f32c32fb8f137646c0dd01f9fc68840e910020aff74f.scope: Deactivated successfully.
Dec  1 04:47:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:51 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:51 np0005540825 ceph-mon[74416]: Added host compute-2
Dec  1 04:47:51 np0005540825 ceph-mon[74416]: Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  1 04:47:51 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:51 np0005540825 ceph-mon[74416]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  1 04:47:51 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:51 np0005540825 ceph-mon[74416]: Marking host: compute-0 for OSDSpec preview refresh.
Dec  1 04:47:51 np0005540825 ceph-mon[74416]: Marking host: compute-1 for OSDSpec preview refresh.
Dec  1 04:47:51 np0005540825 ceph-mon[74416]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  1 04:47:51 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:47:51 np0005540825 python3[80758]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:47:52 np0005540825 podman[80760]: 2025-12-01 09:47:52.085993294 +0000 UTC m=+0.068085577 container create 5f51106dc27b9130457bf23b02b070416ed50736475539b25062696a36ea05b2 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:47:52 np0005540825 systemd[1]: Started libpod-conmon-5f51106dc27b9130457bf23b02b070416ed50736475539b25062696a36ea05b2.scope.
Dec  1 04:47:52 np0005540825 podman[80760]: 2025-12-01 09:47:52.058004723 +0000 UTC m=+0.040097046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:47:52 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:47:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39285bb038b378aa1755429c3f3df3ca6d42f31cdf10a46fd15a04b8dde0584a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39285bb038b378aa1755429c3f3df3ca6d42f31cdf10a46fd15a04b8dde0584a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39285bb038b378aa1755429c3f3df3ca6d42f31cdf10a46fd15a04b8dde0584a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:47:52 np0005540825 podman[80760]: 2025-12-01 09:47:52.185722646 +0000 UTC m=+0.167814899 container init 5f51106dc27b9130457bf23b02b070416ed50736475539b25062696a36ea05b2 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 04:47:52 np0005540825 podman[80760]: 2025-12-01 09:47:52.193362133 +0000 UTC m=+0.175454376 container start 5f51106dc27b9130457bf23b02b070416ed50736475539b25062696a36ea05b2 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:47:52 np0005540825 podman[80760]: 2025-12-01 09:47:52.199961593 +0000 UTC m=+0.182053836 container attach 5f51106dc27b9130457bf23b02b070416ed50736475539b25062696a36ea05b2 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 04:47:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  1 04:47:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2624500201' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  1 04:47:52 np0005540825 gallant_heyrovsky[80776]: 
Dec  1 04:47:52 np0005540825 gallant_heyrovsky[80776]: {"fsid":"365f19c2-81e5-5edd-b6b4-280555214d3a","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":59,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-01T09:46:50:475394+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-01T09:46:50.478454+0000","services":{}},"progress_events":{}}
Dec  1 04:47:52 np0005540825 systemd[1]: libpod-5f51106dc27b9130457bf23b02b070416ed50736475539b25062696a36ea05b2.scope: Deactivated successfully.
Dec  1 04:47:52 np0005540825 podman[80760]: 2025-12-01 09:47:52.627093717 +0000 UTC m=+0.609185970 container died 5f51106dc27b9130457bf23b02b070416ed50736475539b25062696a36ea05b2 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Dec  1 04:47:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay-39285bb038b378aa1755429c3f3df3ca6d42f31cdf10a46fd15a04b8dde0584a-merged.mount: Deactivated successfully.
Dec  1 04:47:52 np0005540825 podman[80760]: 2025-12-01 09:47:52.662693335 +0000 UTC m=+0.644785598 container remove 5f51106dc27b9130457bf23b02b070416ed50736475539b25062696a36ea05b2 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:47:52 np0005540825 systemd[1]: libpod-conmon-5f51106dc27b9130457bf23b02b070416ed50736475539b25062696a36ea05b2.scope: Deactivated successfully.
Dec  1 04:47:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:47:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:47:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:47:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:48:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:48:07 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:48:07 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:48:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:48:08 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:48:08 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:48:08 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:08 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:08 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:08 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:08 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 04:48:08 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:48:08 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:48:08 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:48:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:09 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:48:09 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:48:09 np0005540825 ceph-mon[74416]: Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:48:09 np0005540825 ceph-mon[74416]: Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:48:09 np0005540825 ceph-mon[74416]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:48:09 np0005540825 ceph-mon[74416]: Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
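The four "Updating compute-1:..." pushes distribute the minimal client config and the admin keyring to the new host. A sketch of inspecting what gets written, with the fsid taken from this log; the mon address is an assumption based on compute-0's IP:

    ceph config generate-minimal-conf
    # Expected shape:
    #   [global]
    #           fsid = 365f19c2-81e5-5edd-b6b4-280555214d3a
    #           mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]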
Dec  1 04:48:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:48:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:48:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:10 np0005540825 ceph-mgr[74709]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  1 04:48:10 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  1 04:48:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:10 np0005540825 ceph-mgr[74709]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  1 04:48:10 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  1 04:48:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:48:10.034+0000 7f9863907640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Dec  1 04:48:10 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 53d813aa-69d7-43b8-8782-ac328e0c1a90 (Updating crash deployment (+1 -> 2))
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: service_name: mon
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: placement:
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  hosts:
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  - compute-0
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  - compute-1
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  - compute-2
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:48:10.035+0000 7f9863907640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: service_name: mgr
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: placement:
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  hosts:
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  - compute-0
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  - compute-1
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  - compute-2
Dec  1 04:48:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
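A hedged remediation sketch for the two "Unknown hosts" failures above: list the hosts the orchestrator currently knows, re-add the missing one (the address comes from the "Added host 'compute-2'" line at 04:47:50), and the mon/mgr specs reapply on the next serve loop:

    ceph orch host ls
    ceph orch host add compute-2 192.168.122.102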
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:48:10 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Dec  1 04:48:10 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
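The mgr's auth call above maps to this one-liner, shown for reference; cephadm issues it itself when deploying crash.compute-1:

    ceph auth get-or-create client.crash.compute-1 mon 'profile crash' mgr 'profile crash'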
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  1 04:48:10 np0005540825 ceph-mon[74416]: Deploying daemon crash.compute-1 on compute-1
Dec  1 04:48:11 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:48:11
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] No pools available
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:48:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:48:11 np0005540825 ceph-mon[74416]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
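The spec failures are now a cluster health condition; it clears on its own once both specs apply. To inspect while it is raised:

    ceph health detail    # lists CEPHADM_APPLY_SPEC_FAIL with the failing service names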
Dec  1 04:48:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:12 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 53d813aa-69d7-43b8-8782-ac328e0c1a90 (Updating crash deployment (+1 -> 2))
Dec  1 04:48:12 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 53d813aa-69d7-43b8-8782-ac328e0c1a90 (Updating crash deployment (+1 -> 2)) in 2 seconds
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:48:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:48:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:48:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:48:13 np0005540825 podman[80903]: 2025-12-01 09:48:13.167016356 +0000 UTC m=+0.038158864 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:13 np0005540825 podman[80903]: 2025-12-01 09:48:13.289750389 +0000 UTC m=+0.160892837 container create a4a1d9f193dd0312e7c8802886da1d5bbffde2c21198ffec4deb50d63df9af65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 04:48:13 np0005540825 systemd[1]: Started libpod-conmon-a4a1d9f193dd0312e7c8802886da1d5bbffde2c21198ffec4deb50d63df9af65.scope.
Dec  1 04:48:13 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:13 np0005540825 podman[80903]: 2025-12-01 09:48:13.384872744 +0000 UTC m=+0.256015202 container init a4a1d9f193dd0312e7c8802886da1d5bbffde2c21198ffec4deb50d63df9af65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_noether, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 04:48:13 np0005540825 podman[80903]: 2025-12-01 09:48:13.394538426 +0000 UTC m=+0.265680884 container start a4a1d9f193dd0312e7c8802886da1d5bbffde2c21198ffec4deb50d63df9af65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:48:13 np0005540825 competent_noether[80919]: 167 167
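"167 167" is the uid/gid pair of the ceph user in Red Hat builds; cephadm launches this short-lived container to discover it before laying down daemon directories. A hedged equivalent of the probe:

    podman run --rm --entrypoint stat quay.io/ceph/ceph:v19 -c '%u %g' /var/lib/ceph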
Dec  1 04:48:13 np0005540825 systemd[1]: libpod-a4a1d9f193dd0312e7c8802886da1d5bbffde2c21198ffec4deb50d63df9af65.scope: Deactivated successfully.
Dec  1 04:48:13 np0005540825 podman[80903]: 2025-12-01 09:48:13.45340362 +0000 UTC m=+0.324546048 container attach a4a1d9f193dd0312e7c8802886da1d5bbffde2c21198ffec4deb50d63df9af65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_noether, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:48:13 np0005540825 podman[80903]: 2025-12-01 09:48:13.453845652 +0000 UTC m=+0.324988080 container died a4a1d9f193dd0312e7c8802886da1d5bbffde2c21198ffec4deb50d63df9af65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_noether, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  1 04:48:13 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e125aacc3c1818759b160d1030fa37b29674f84d6e49a5bf50f2b8204da9ee63-merged.mount: Deactivated successfully.
Dec  1 04:48:13 np0005540825 podman[80903]: 2025-12-01 09:48:13.573675736 +0000 UTC m=+0.444818194 container remove a4a1d9f193dd0312e7c8802886da1d5bbffde2c21198ffec4deb50d63df9af65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:48:13 np0005540825 systemd[1]: libpod-conmon-a4a1d9f193dd0312e7c8802886da1d5bbffde2c21198ffec4deb50d63df9af65.scope: Deactivated successfully.
Dec  1 04:48:13 np0005540825 podman[80946]: 2025-12-01 09:48:13.765064547 +0000 UTC m=+0.039074829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:13 np0005540825 podman[80946]: 2025-12-01 09:48:13.871365446 +0000 UTC m=+0.145375698 container create ce34abd5ccdaca88394c4a16443ac24a3e8a2bc3d8ad9ecb7dc0e1c81b8ec5d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:48:13 np0005540825 systemd[1]: Started libpod-conmon-ce34abd5ccdaca88394c4a16443ac24a3e8a2bc3d8ad9ecb7dc0e1c81b8ec5d6.scope.
Dec  1 04:48:13 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74f71898deb9072bde224b1eb5c268919a9f8b632ded565c2a806fc10ab39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74f71898deb9072bde224b1eb5c268919a9f8b632ded565c2a806fc10ab39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74f71898deb9072bde224b1eb5c268919a9f8b632ded565c2a806fc10ab39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74f71898deb9072bde224b1eb5c268919a9f8b632ded565c2a806fc10ab39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e74f71898deb9072bde224b1eb5c268919a9f8b632ded565c2a806fc10ab39/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
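[The kernel prints one of these lines for every bind mount taken from an XFS filesystem created without the bigtime feature: inode timestamps on it overflow at 2038-01-19, i.e. 0x7fffffff seconds since the epoch. It is informational, not an error. A minimal check of the feature on the host filesystem, assuming an xfsprogs new enough to report it and using this host's container-storage mount point:

    # bigtime=0 means 2038-limited timestamps, matching the kernel message;
    # bigtime=1 means the filesystem already uses large timestamps
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'
]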
Dec  1 04:48:13 np0005540825 podman[80946]: 2025-12-01 09:48:13.988683562 +0000 UTC m=+0.262693844 container init ce34abd5ccdaca88394c4a16443ac24a3e8a2bc3d8ad9ecb7dc0e1c81b8ec5d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 04:48:14 np0005540825 podman[80946]: 2025-12-01 09:48:14.004424228 +0000 UTC m=+0.278434470 container start ce34abd5ccdaca88394c4a16443ac24a3e8a2bc3d8ad9ecb7dc0e1c81b8ec5d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:48:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:14 np0005540825 podman[80946]: 2025-12-01 09:48:14.132839914 +0000 UTC m=+0.406850206 container attach ce34abd5ccdaca88394c4a16443ac24a3e8a2bc3d8ad9ecb7dc0e1c81b8ec5d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_williams, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 04:48:14 np0005540825 admiring_williams[80962]: --> passed data devices: 0 physical, 1 LVM
Dec  1 04:48:14 np0005540825 admiring_williams[80962]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  1 04:48:14 np0005540825 admiring_williams[80962]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a81d93fb-5215-4a2c-87f7-124573e3e396"} v 0)
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3529352385' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a81d93fb-5215-4a2c-87f7-124573e3e396"}]: dispatch
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:48:14 np0005540825 admiring_williams[80962]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 0faa9895-0b70-4c34-8548-ef8fc62fc047
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3529352385' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a81d93fb-5215-4a2c-87f7-124573e3e396"}]': finished
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:48:14 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "0faa9895-0b70-4c34-8548-ef8fc62fc047"} v 0)
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/61696110' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0faa9895-0b70-4c34-8548-ef8fc62fc047"}]: dispatch
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/61696110' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0faa9895-0b70-4c34-8548-ef8fc62fc047"}]': finished
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:48:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:48:14 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  1 04:48:14 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
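[The two "osd new" transactions above allocate OSD ids in the osdmap: epoch e4 records the first id ("1 total, 0 up, 1 in"), e5 the second ("2 total, 0 up, 2 in"). The mgr's "failed to return metadata" errors are expected at this stage, because the ids exist in the map but no ceph-osd daemon has started and registered its metadata yet. Once osd.1 is running the same query succeeds; a sketch, assuming the admin keyring is in place and jq is on the host:

    # hostname and osd_objectstore are standard fields in the metadata blob
    ceph osd metadata 1 --format json | jq '{hostname, osd_objectstore}'
]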
Dec  1 04:48:15 np0005540825 admiring_williams[80962]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec  1 04:48:15 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.101:0/3529352385' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a81d93fb-5215-4a2c-87f7-124573e3e396"}]: dispatch
Dec  1 04:48:15 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.101:0/3529352385' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a81d93fb-5215-4a2c-87f7-124573e3e396"}]': finished
Dec  1 04:48:15 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/61696110' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0faa9895-0b70-4c34-8548-ef8fc62fc047"}]: dispatch
Dec  1 04:48:15 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/61696110' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0faa9895-0b70-4c34-8548-ef8fc62fc047"}]': finished
Dec  1 04:48:15 np0005540825 admiring_williams[80962]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  1 04:48:15 np0005540825 admiring_williams[80962]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  1 04:48:15 np0005540825 admiring_williams[80962]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:15 np0005540825 admiring_williams[80962]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec  1 04:48:15 np0005540825 lvm[81026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:48:15 np0005540825 lvm[81026]: VG ceph_vg0 finished
Dec  1 04:48:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  1 04:48:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3203458709' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  1 04:48:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  1 04:48:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1374989353' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  1 04:48:15 np0005540825 admiring_williams[80962]: stderr: got monmap epoch 1
Dec  1 04:48:15 np0005540825 admiring_williams[80962]: --> Creating keyring file for osd.1
Dec  1 04:48:15 np0005540825 admiring_williams[80962]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec  1 04:48:15 np0005540825 admiring_williams[80962]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec  1 04:48:15 np0005540825 admiring_williams[80962]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 0faa9895-0b70-4c34-8548-ef8fc62fc047 --setuser ceph --setgroup ceph
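[The "Running command" lines above are ceph-volume's bluestore prepare phase for osd.1: stage the OSD directory on tmpfs, link it to the LV, fetch the monmap with the bootstrap-osd key, write the OSD keyring, then run ceph-osd --mkfs. Condensed into a sketch with the same ids and paths as logged (in practice `ceph-volume lvm prepare` drives these steps; they are not run by hand):

    mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
    ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
    ceph --cluster ceph --name client.bootstrap-osd \
        --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
        mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
    ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 \
        --monmap /var/lib/ceph/osd/ceph-1/activate.monmap \
        --osd-uuid 0faa9895-0b70-4c34-8548-ef8fc62fc047 \
        --setuser ceph --setgroup ceph
]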
Dec  1 04:48:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:16 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  1 04:48:16 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 2 completed events
Dec  1 04:48:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:48:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:17 np0005540825 ceph-mon[74416]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  1 04:48:17 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:48:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:18 np0005540825 admiring_williams[80962]: stderr: 2025-12-01T09:48:15.733+0000 7f9a6f566740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec  1 04:48:18 np0005540825 admiring_williams[80962]: stderr: 2025-12-01T09:48:15.995+0000 7f9a6f566740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec  1 04:48:18 np0005540825 admiring_williams[80962]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  1 04:48:18 np0005540825 admiring_williams[80962]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  1 04:48:18 np0005540825 admiring_williams[80962]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  1 04:48:18 np0005540825 admiring_williams[80962]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:18 np0005540825 admiring_williams[80962]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:18 np0005540825 admiring_williams[80962]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  1 04:48:18 np0005540825 admiring_williams[80962]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  1 04:48:19 np0005540825 admiring_williams[80962]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  1 04:48:19 np0005540825 admiring_williams[80962]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
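[The two stderr lines above ("No valid bdev label found", "_read_fsid unparsable uuid") are normal on a first mkfs: the LV carries no bluestore label or fsid until ceph-osd writes them. The activate phase that follows (prime-osd-dir, re-linking block, chown) rebuilds /var/lib/ceph/osd/ceph-1 from that label. A quick way to confirm the label afterwards, assuming ceph-bluestore-tool is available on the host (device path from this log):

    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0
]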
Dec  1 04:48:19 np0005540825 systemd[1]: libpod-ce34abd5ccdaca88394c4a16443ac24a3e8a2bc3d8ad9ecb7dc0e1c81b8ec5d6.scope: Deactivated successfully.
Dec  1 04:48:19 np0005540825 systemd[1]: libpod-ce34abd5ccdaca88394c4a16443ac24a3e8a2bc3d8ad9ecb7dc0e1c81b8ec5d6.scope: Consumed 2.414s CPU time.
Dec  1 04:48:19 np0005540825 podman[80946]: 2025-12-01 09:48:19.057484282 +0000 UTC m=+5.331494514 container died ce34abd5ccdaca88394c4a16443ac24a3e8a2bc3d8ad9ecb7dc0e1c81b8ec5d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_williams, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:19 np0005540825 systemd[1]: var-lib-containers-storage-overlay-90e74f71898deb9072bde224b1eb5c268919a9f8b632ded565c2a806fc10ab39-merged.mount: Deactivated successfully.
Dec  1 04:48:19 np0005540825 podman[80946]: 2025-12-01 09:48:19.119891602 +0000 UTC m=+5.393901834 container remove ce34abd5ccdaca88394c4a16443ac24a3e8a2bc3d8ad9ecb7dc0e1c81b8ec5d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:48:19 np0005540825 systemd[1]: libpod-conmon-ce34abd5ccdaca88394c4a16443ac24a3e8a2bc3d8ad9ecb7dc0e1c81b8ec5d6.scope: Deactivated successfully.
Dec  1 04:48:19 np0005540825 podman[82051]: 2025-12-01 09:48:19.759390415 +0000 UTC m=+0.045839482 container create 17ca0c77a526a8e25ecf7dabaf6e7fea8ec39ec9629a3e3d68fe3a72945db72a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 04:48:19 np0005540825 systemd[1]: Started libpod-conmon-17ca0c77a526a8e25ecf7dabaf6e7fea8ec39ec9629a3e3d68fe3a72945db72a.scope.
Dec  1 04:48:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec  1 04:48:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  1 04:48:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:48:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:48:19 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Dec  1 04:48:19 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Dec  1 04:48:19 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:19 np0005540825 podman[82051]: 2025-12-01 09:48:19.738716826 +0000 UTC m=+0.025165923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:19 np0005540825 podman[82051]: 2025-12-01 09:48:19.841260462 +0000 UTC m=+0.127709539 container init 17ca0c77a526a8e25ecf7dabaf6e7fea8ec39ec9629a3e3d68fe3a72945db72a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:48:19 np0005540825 podman[82051]: 2025-12-01 09:48:19.854099989 +0000 UTC m=+0.140549056 container start 17ca0c77a526a8e25ecf7dabaf6e7fea8ec39ec9629a3e3d68fe3a72945db72a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:48:19 np0005540825 podman[82051]: 2025-12-01 09:48:19.858374185 +0000 UTC m=+0.144823252 container attach 17ca0c77a526a8e25ecf7dabaf6e7fea8ec39ec9629a3e3d68fe3a72945db72a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 04:48:19 np0005540825 condescending_hermann[82068]: 167 167
Dec  1 04:48:19 np0005540825 systemd[1]: libpod-17ca0c77a526a8e25ecf7dabaf6e7fea8ec39ec9629a3e3d68fe3a72945db72a.scope: Deactivated successfully.
Dec  1 04:48:19 np0005540825 podman[82051]: 2025-12-01 09:48:19.862016084 +0000 UTC m=+0.148465181 container died 17ca0c77a526a8e25ecf7dabaf6e7fea8ec39ec9629a3e3d68fe3a72945db72a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 04:48:19 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9c919358cadfcf7d30b8965a91cfecf0930956e0c568c470cf1c2c6bca2f87f5-merged.mount: Deactivated successfully.
Dec  1 04:48:19 np0005540825 podman[82051]: 2025-12-01 09:48:19.900856455 +0000 UTC m=+0.187305562 container remove 17ca0c77a526a8e25ecf7dabaf6e7fea8ec39ec9629a3e3d68fe3a72945db72a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:48:19 np0005540825 systemd[1]: libpod-conmon-17ca0c77a526a8e25ecf7dabaf6e7fea8ec39ec9629a3e3d68fe3a72945db72a.scope: Deactivated successfully.
Dec  1 04:48:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:20 np0005540825 podman[82093]: 2025-12-01 09:48:20.079134352 +0000 UTC m=+0.050356614 container create b30a7a1a4ea6d1c09bdda110f5805efdea10bbc19d296fa31ee67f4c83efc973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_hypatia, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:48:20 np0005540825 systemd[1]: Started libpod-conmon-b30a7a1a4ea6d1c09bdda110f5805efdea10bbc19d296fa31ee67f4c83efc973.scope.
Dec  1 04:48:20 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0d8cd1971fc535efc7e794035092347ee5959c18bd483d78db87c26b68c059/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0d8cd1971fc535efc7e794035092347ee5959c18bd483d78db87c26b68c059/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0d8cd1971fc535efc7e794035092347ee5959c18bd483d78db87c26b68c059/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0d8cd1971fc535efc7e794035092347ee5959c18bd483d78db87c26b68c059/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:20 np0005540825 podman[82093]: 2025-12-01 09:48:20.059280715 +0000 UTC m=+0.030503027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:20 np0005540825 podman[82093]: 2025-12-01 09:48:20.162372966 +0000 UTC m=+0.133595258 container init b30a7a1a4ea6d1c09bdda110f5805efdea10bbc19d296fa31ee67f4c83efc973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_hypatia, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:48:20 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  1 04:48:20 np0005540825 podman[82093]: 2025-12-01 09:48:20.174155775 +0000 UTC m=+0.145378027 container start b30a7a1a4ea6d1c09bdda110f5805efdea10bbc19d296fa31ee67f4c83efc973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_hypatia, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  1 04:48:20 np0005540825 podman[82093]: 2025-12-01 09:48:20.177943747 +0000 UTC m=+0.149166099 container attach b30a7a1a4ea6d1c09bdda110f5805efdea10bbc19d296fa31ee67f4c83efc973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]: {
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:    "1": [
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:        {
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            "devices": [
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "/dev/loop3"
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            ],
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            "lv_name": "ceph_lv0",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            "lv_size": "21470642176",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            "name": "ceph_lv0",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            "tags": {
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.cephx_lockbox_secret": "",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.cluster_name": "ceph",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.crush_device_class": "",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.encrypted": "0",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.osd_id": "1",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.type": "block",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.vdo": "0",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:                "ceph.with_tpm": "0"
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            },
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            "type": "block",
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:            "vg_name": "ceph_vg0"
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:        }
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]:    ]
Dec  1 04:48:20 np0005540825 cool_hypatia[82109]: }
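[The JSON block emitted by cool_hypatia is cephadm inventorying osd.1's logical volume; it matches the output shape of `ceph-volume lvm list --format json`, keyed by OSD id, with the ceph.* LV tags carrying the cluster fsid, OSD fsid, and drive-group affinity. The same tags are stored on the LV itself and can be read directly with the lvm2 tools, a sketch assuming they are installed on the host (VG/LV names from the log):

    lvs -o lv_tags --noheadings ceph_vg0/ceph_lv0
]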
Dec  1 04:48:20 np0005540825 systemd[1]: libpod-b30a7a1a4ea6d1c09bdda110f5805efdea10bbc19d296fa31ee67f4c83efc973.scope: Deactivated successfully.
Dec  1 04:48:20 np0005540825 podman[82093]: 2025-12-01 09:48:20.501649941 +0000 UTC m=+0.472872243 container died b30a7a1a4ea6d1c09bdda110f5805efdea10bbc19d296fa31ee67f4c83efc973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  1 04:48:20 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9c0d8cd1971fc535efc7e794035092347ee5959c18bd483d78db87c26b68c059-merged.mount: Deactivated successfully.
Dec  1 04:48:20 np0005540825 podman[82093]: 2025-12-01 09:48:20.555823448 +0000 UTC m=+0.527045720 container remove b30a7a1a4ea6d1c09bdda110f5805efdea10bbc19d296fa31ee67f4c83efc973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_hypatia, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 04:48:20 np0005540825 systemd[1]: libpod-conmon-b30a7a1a4ea6d1c09bdda110f5805efdea10bbc19d296fa31ee67f4c83efc973.scope: Deactivated successfully.
Dec  1 04:48:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec  1 04:48:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  1 04:48:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:48:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:48:20 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec  1 04:48:20 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec  1 04:48:21 np0005540825 podman[82223]: 2025-12-01 09:48:21.143981932 +0000 UTC m=+0.038494293 container create 3488250106d36dd1cac685287ae6c8d6f019b6af38ca7d2511c43c045258f1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_ishizaka, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  1 04:48:21 np0005540825 systemd[1]: Started libpod-conmon-3488250106d36dd1cac685287ae6c8d6f019b6af38ca7d2511c43c045258f1f5.scope.
Dec  1 04:48:21 np0005540825 ceph-mon[74416]: Deploying daemon osd.0 on compute-1
Dec  1 04:48:21 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  1 04:48:21 np0005540825 podman[82223]: 2025-12-01 09:48:21.126956721 +0000 UTC m=+0.021469072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:21 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:21 np0005540825 podman[82223]: 2025-12-01 09:48:21.240931176 +0000 UTC m=+0.135443527 container init 3488250106d36dd1cac685287ae6c8d6f019b6af38ca7d2511c43c045258f1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:21 np0005540825 podman[82223]: 2025-12-01 09:48:21.247498643 +0000 UTC m=+0.142011014 container start 3488250106d36dd1cac685287ae6c8d6f019b6af38ca7d2511c43c045258f1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_ishizaka, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  1 04:48:21 np0005540825 podman[82223]: 2025-12-01 09:48:21.251487031 +0000 UTC m=+0.145999402 container attach 3488250106d36dd1cac685287ae6c8d6f019b6af38ca7d2511c43c045258f1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_ishizaka, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:48:21 np0005540825 brave_ishizaka[82239]: 167 167
Dec  1 04:48:21 np0005540825 systemd[1]: libpod-3488250106d36dd1cac685287ae6c8d6f019b6af38ca7d2511c43c045258f1f5.scope: Deactivated successfully.
Dec  1 04:48:21 np0005540825 podman[82223]: 2025-12-01 09:48:21.252446987 +0000 UTC m=+0.146959328 container died 3488250106d36dd1cac685287ae6c8d6f019b6af38ca7d2511c43c045258f1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:21 np0005540825 systemd[1]: var-lib-containers-storage-overlay-46db254a4b207ea4b4de872110060c995cf63da55b6546bbecfb51e759795a1d-merged.mount: Deactivated successfully.
Dec  1 04:48:21 np0005540825 podman[82223]: 2025-12-01 09:48:21.286040697 +0000 UTC m=+0.180553028 container remove 3488250106d36dd1cac685287ae6c8d6f019b6af38ca7d2511c43c045258f1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_ishizaka, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:21 np0005540825 systemd[1]: libpod-conmon-3488250106d36dd1cac685287ae6c8d6f019b6af38ca7d2511c43c045258f1f5.scope: Deactivated successfully.
Dec  1 04:48:21 np0005540825 podman[82270]: 2025-12-01 09:48:21.61341821 +0000 UTC m=+0.053675804 container create a071b1b2a81cd77042297a3f897b0e6522b2317ac052186b5ca942580f26d865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate-test, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:48:21 np0005540825 systemd[1]: Started libpod-conmon-a071b1b2a81cd77042297a3f897b0e6522b2317ac052186b5ca942580f26d865.scope.
Dec  1 04:48:21 np0005540825 podman[82270]: 2025-12-01 09:48:21.591200449 +0000 UTC m=+0.031458063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:21 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:21 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d6d9d7a981110e7752717985df51048d2036ce52a490a0f6ad74d96da65b67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:21 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d6d9d7a981110e7752717985df51048d2036ce52a490a0f6ad74d96da65b67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:21 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d6d9d7a981110e7752717985df51048d2036ce52a490a0f6ad74d96da65b67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:21 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d6d9d7a981110e7752717985df51048d2036ce52a490a0f6ad74d96da65b67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:21 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d6d9d7a981110e7752717985df51048d2036ce52a490a0f6ad74d96da65b67/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:21 np0005540825 podman[82270]: 2025-12-01 09:48:21.713755437 +0000 UTC m=+0.154013061 container init a071b1b2a81cd77042297a3f897b0e6522b2317ac052186b5ca942580f26d865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate-test, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:48:21 np0005540825 podman[82270]: 2025-12-01 09:48:21.71978287 +0000 UTC m=+0.160040454 container start a071b1b2a81cd77042297a3f897b0e6522b2317ac052186b5ca942580f26d865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:48:21 np0005540825 podman[82270]: 2025-12-01 09:48:21.722907075 +0000 UTC m=+0.163164669 container attach a071b1b2a81cd77042297a3f897b0e6522b2317ac052186b5ca942580f26d865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate-test, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:48:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate-test[82286]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec  1 04:48:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate-test[82286]:                            [--no-systemd] [--no-tmpfs]
Dec  1 04:48:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate-test[82286]: ceph-volume activate: error: unrecognized arguments: --bad-option
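[The -activate-test container exits immediately because it was passed an argument the raw `ceph-volume activate` subcommand does not recognize; per the usage text above it accepts only --osd-id, --osd-uuid, --no-systemd and --no-tmpfs. A valid invocation for this OSD would look like the following sketch (ids taken from this log; inside a container, --no-systemd is the usual choice):

    ceph-volume activate --osd-id 1 \
        --osd-uuid 0faa9895-0b70-4c34-8548-ef8fc62fc047 --no-systemd
]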
Dec  1 04:48:21 np0005540825 systemd[1]: libpod-a071b1b2a81cd77042297a3f897b0e6522b2317ac052186b5ca942580f26d865.scope: Deactivated successfully.
Dec  1 04:48:21 np0005540825 podman[82270]: 2025-12-01 09:48:21.895908418 +0000 UTC m=+0.336166022 container died a071b1b2a81cd77042297a3f897b0e6522b2317ac052186b5ca942580f26d865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:48:21 np0005540825 systemd[1]: var-lib-containers-storage-overlay-14d6d9d7a981110e7752717985df51048d2036ce52a490a0f6ad74d96da65b67-merged.mount: Deactivated successfully.
Dec  1 04:48:21 np0005540825 podman[82270]: 2025-12-01 09:48:21.943136377 +0000 UTC m=+0.383393961 container remove a071b1b2a81cd77042297a3f897b0e6522b2317ac052186b5ca942580f26d865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate-test, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:48:21 np0005540825 systemd[1]: libpod-conmon-a071b1b2a81cd77042297a3f897b0e6522b2317ac052186b5ca942580f26d865.scope: Deactivated successfully.
Dec  1 04:48:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:22 np0005540825 ceph-mon[74416]: Deploying daemon osd.1 on compute-0
Dec  1 04:48:22 np0005540825 systemd[1]: Reloading.
Dec  1 04:48:22 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:48:22 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:48:22 np0005540825 systemd[1]: Reloading.
Dec  1 04:48:22 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:48:22 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:48:22 np0005540825 systemd[1]: Starting Ceph osd.1 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
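[The unit being started here is the fsid-qualified OSD service; assuming cephadm's standard ceph-<fsid>@<daemon> unit template (the exact unit name is not shown verbatim in this log), osd.1 can be managed on the host with:

    systemctl status 'ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@osd.1.service'
]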
Dec  1 04:48:22 np0005540825 python3[82440]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
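[This ansible task is the deployment's readiness probe: it runs `ceph status --format json` in a throwaway container (the quay.io/ceph/ceph:v19 create just below) and extracts the count of up OSDs. The jq filter in isolation, applied to a trimmed sample of `ceph status` output:

    echo '{"osdmap": {"num_up_osds": 2}}' | jq .osdmap.num_up_osds
    # -> 2
]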
Dec  1 04:48:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:48:23 np0005540825 podman[82476]: 2025-12-01 09:48:23.03027511 +0000 UTC m=+0.039374417 container create 943dc7d348082da6cbb4328bdff94a715352186045f3fd17a9a1dc684f409055 (image=quay.io/ceph/ceph:v19, name=stupefied_archimedes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:48:23 np0005540825 podman[82474]: 2025-12-01 09:48:23.035517612 +0000 UTC m=+0.051450874 container create 86a147e60a461813f7ff22de96f2c4c23c1a33b633f084a81847c9c11e68ae49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:48:23 np0005540825 systemd[1]: Started libpod-conmon-943dc7d348082da6cbb4328bdff94a715352186045f3fd17a9a1dc684f409055.scope.
Dec  1 04:48:23 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cc8206f0010c700a33b88cd4c2c5e5264174cfa93e0c7aab73b59a8f0a6955/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cc8206f0010c700a33b88cd4c2c5e5264174cfa93e0c7aab73b59a8f0a6955/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cc8206f0010c700a33b88cd4c2c5e5264174cfa93e0c7aab73b59a8f0a6955/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cc8206f0010c700a33b88cd4c2c5e5264174cfa93e0c7aab73b59a8f0a6955/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cc8206f0010c700a33b88cd4c2c5e5264174cfa93e0c7aab73b59a8f0a6955/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:23 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:23 np0005540825 podman[82474]: 2025-12-01 09:48:23.095914687 +0000 UTC m=+0.111847919 container init 86a147e60a461813f7ff22de96f2c4c23c1a33b633f084a81847c9c11e68ae49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  1 04:48:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6af6b78d31a4c473891da3d78d58b414b2b29c089db5e9029d9f03138a4f03/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6af6b78d31a4c473891da3d78d58b414b2b29c089db5e9029d9f03138a4f03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6af6b78d31a4c473891da3d78d58b414b2b29c089db5e9029d9f03138a4f03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:23 np0005540825 podman[82474]: 2025-12-01 09:48:23.012843488 +0000 UTC m=+0.028776730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:23 np0005540825 podman[82476]: 2025-12-01 09:48:23.012729085 +0000 UTC m=+0.021828412 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:48:23 np0005540825 podman[82476]: 2025-12-01 09:48:23.116209017 +0000 UTC m=+0.125308334 container init 943dc7d348082da6cbb4328bdff94a715352186045f3fd17a9a1dc684f409055 (image=quay.io/ceph/ceph:v19, name=stupefied_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:23 np0005540825 podman[82474]: 2025-12-01 09:48:23.11669237 +0000 UTC m=+0.132625572 container start 86a147e60a461813f7ff22de96f2c4c23c1a33b633f084a81847c9c11e68ae49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 04:48:23 np0005540825 podman[82474]: 2025-12-01 09:48:23.120829382 +0000 UTC m=+0.136762594 container attach 86a147e60a461813f7ff22de96f2c4c23c1a33b633f084a81847c9c11e68ae49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Dec  1 04:48:23 np0005540825 podman[82476]: 2025-12-01 09:48:23.130154974 +0000 UTC m=+0.139254291 container start 943dc7d348082da6cbb4328bdff94a715352186045f3fd17a9a1dc684f409055 (image=quay.io/ceph/ceph:v19, name=stupefied_archimedes, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:48:23 np0005540825 podman[82476]: 2025-12-01 09:48:23.134201014 +0000 UTC m=+0.143300341 container attach 943dc7d348082da6cbb4328bdff94a715352186045f3fd17a9a1dc684f409055 (image=quay.io/ceph/ceph:v19, name=stupefied_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  1 04:48:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  1 04:48:23 np0005540825 bash[82474]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  1 04:48:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  1 04:48:23 np0005540825 bash[82474]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  1 04:48:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  1 04:48:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/988043334' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  1 04:48:23 np0005540825 stupefied_archimedes[82506]: 
Dec  1 04:48:23 np0005540825 stupefied_archimedes[82506]: {"fsid":"365f19c2-81e5-5edd-b6b4-280555214d3a","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":90,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1764582494,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-01T09:46:50:475394+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-01T09:48:14.037975+0000","services":{}},"progress_events":{}}
Dec  1 04:48:23 np0005540825 systemd[1]: libpod-943dc7d348082da6cbb4328bdff94a715352186045f3fd17a9a1dc684f409055.scope: Deactivated successfully.
Dec  1 04:48:23 np0005540825 conmon[82506]: conmon 943dc7d348082da6cbb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-943dc7d348082da6cbb4328bdff94a715352186045f3fd17a9a1dc684f409055.scope/container/memory.events
Dec  1 04:48:23 np0005540825 podman[82476]: 2025-12-01 09:48:23.57949897 +0000 UTC m=+0.588598277 container died 943dc7d348082da6cbb4328bdff94a715352186045f3fd17a9a1dc684f409055 (image=quay.io/ceph/ceph:v19, name=stupefied_archimedes, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  1 04:48:23 np0005540825 systemd[1]: var-lib-containers-storage-overlay-fd6af6b78d31a4c473891da3d78d58b414b2b29c089db5e9029d9f03138a4f03-merged.mount: Deactivated successfully.
Dec  1 04:48:23 np0005540825 podman[82476]: 2025-12-01 09:48:23.618398113 +0000 UTC m=+0.627497420 container remove 943dc7d348082da6cbb4328bdff94a715352186045f3fd17a9a1dc684f409055 (image=quay.io/ceph/ceph:v19, name=stupefied_archimedes, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Dec  1 04:48:23 np0005540825 systemd[1]: libpod-conmon-943dc7d348082da6cbb4328bdff94a715352186045f3fd17a9a1dc684f409055.scope: Deactivated successfully.
Dec  1 04:48:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:48:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:48:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:23 np0005540825 lvm[82619]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:48:23 np0005540825 lvm[82619]: VG ceph_vg0 finished
Dec  1 04:48:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  1 04:48:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  1 04:48:23 np0005540825 bash[82474]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  1 04:48:23 np0005540825 bash[82474]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  1 04:48:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  1 04:48:23 np0005540825 bash[82474]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  1 04:48:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  1 04:48:24 np0005540825 bash[82474]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  1 04:48:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  1 04:48:24 np0005540825 bash[82474]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  1 04:48:24 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:24 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:24 np0005540825 bash[82474]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:24 np0005540825 bash[82474]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  1 04:48:24 np0005540825 bash[82474]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  1 04:48:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  1 04:48:24 np0005540825 bash[82474]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  1 04:48:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate[82503]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  1 04:48:24 np0005540825 bash[82474]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  1 04:48:24 np0005540825 systemd[1]: libpod-86a147e60a461813f7ff22de96f2c4c23c1a33b633f084a81847c9c11e68ae49.scope: Deactivated successfully.
Dec  1 04:48:24 np0005540825 systemd[1]: libpod-86a147e60a461813f7ff22de96f2c4c23c1a33b633f084a81847c9c11e68ae49.scope: Consumed 1.451s CPU time.
Dec  1 04:48:24 np0005540825 podman[82474]: 2025-12-01 09:48:24.453257566 +0000 UTC m=+1.469190778 container died 86a147e60a461813f7ff22de96f2c4c23c1a33b633f084a81847c9c11e68ae49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:48:24 np0005540825 systemd[1]: var-lib-containers-storage-overlay-00cc8206f0010c700a33b88cd4c2c5e5264174cfa93e0c7aab73b59a8f0a6955-merged.mount: Deactivated successfully.
Dec  1 04:48:24 np0005540825 podman[82474]: 2025-12-01 09:48:24.498587273 +0000 UTC m=+1.514520525 container remove 86a147e60a461813f7ff22de96f2c4c23c1a33b633f084a81847c9c11e68ae49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 04:48:24 np0005540825 podman[82790]: 2025-12-01 09:48:24.739491675 +0000 UTC m=+0.053349455 container create 33e73e8956402fc551735994bc8e4e9443da8b888c66db2c52f2b7aa0edf3d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 04:48:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230dffc6939351dc2d45d9e79114050441dd3d64b1a1bb3003559e407b2ef289/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230dffc6939351dc2d45d9e79114050441dd3d64b1a1bb3003559e407b2ef289/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:24 np0005540825 podman[82790]: 2025-12-01 09:48:24.71233176 +0000 UTC m=+0.026189580 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230dffc6939351dc2d45d9e79114050441dd3d64b1a1bb3003559e407b2ef289/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230dffc6939351dc2d45d9e79114050441dd3d64b1a1bb3003559e407b2ef289/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230dffc6939351dc2d45d9e79114050441dd3d64b1a1bb3003559e407b2ef289/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:24 np0005540825 podman[82790]: 2025-12-01 09:48:24.831147996 +0000 UTC m=+0.145005766 container init 33e73e8956402fc551735994bc8e4e9443da8b888c66db2c52f2b7aa0edf3d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:24 np0005540825 podman[82790]: 2025-12-01 09:48:24.849088062 +0000 UTC m=+0.162945842 container start 33e73e8956402fc551735994bc8e4e9443da8b888c66db2c52f2b7aa0edf3d02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:48:24 np0005540825 bash[82790]: 33e73e8956402fc551735994bc8e4e9443da8b888c66db2c52f2b7aa0edf3d02
Dec  1 04:48:24 np0005540825 systemd[1]: Started Ceph osd.1 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:48:24 np0005540825 ceph-osd[82809]: set uid:gid to 167:167 (ceph:ceph)
Dec  1 04:48:24 np0005540825 ceph-osd[82809]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec  1 04:48:24 np0005540825 ceph-osd[82809]: pidfile_write: ignore empty --pid-file
Dec  1 04:48:24 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:24 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:24 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:24 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:24 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:48:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:48:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:25 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:25 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:48:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:25 np0005540825 podman[82919]: 2025-12-01 09:48:25.574494731 +0000 UTC m=+0.049910032 container create bfeb0941e9304d0bf22392cffecbdf73391e7b33d0a87ed61c8234d74cdf4583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 04:48:25 np0005540825 systemd[1]: Started libpod-conmon-bfeb0941e9304d0bf22392cffecbdf73391e7b33d0a87ed61c8234d74cdf4583.scope.
Dec  1 04:48:25 np0005540825 podman[82919]: 2025-12-01 09:48:25.549974698 +0000 UTC m=+0.025390019 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:25 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:25 np0005540825 podman[82919]: 2025-12-01 09:48:25.690084861 +0000 UTC m=+0.165500212 container init bfeb0941e9304d0bf22392cffecbdf73391e7b33d0a87ed61c8234d74cdf4583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hertz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:25 np0005540825 podman[82919]: 2025-12-01 09:48:25.699135146 +0000 UTC m=+0.174550457 container start bfeb0941e9304d0bf22392cffecbdf73391e7b33d0a87ed61c8234d74cdf4583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:25 np0005540825 podman[82919]: 2025-12-01 09:48:25.702942659 +0000 UTC m=+0.178358000 container attach bfeb0941e9304d0bf22392cffecbdf73391e7b33d0a87ed61c8234d74cdf4583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 04:48:25 np0005540825 crazy_hertz[82942]: 167 167
Dec  1 04:48:25 np0005540825 systemd[1]: libpod-bfeb0941e9304d0bf22392cffecbdf73391e7b33d0a87ed61c8234d74cdf4583.scope: Deactivated successfully.
Dec  1 04:48:25 np0005540825 podman[82919]: 2025-12-01 09:48:25.707683827 +0000 UTC m=+0.183099128 container died bfeb0941e9304d0bf22392cffecbdf73391e7b33d0a87ed61c8234d74cdf4583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 04:48:25 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5ec918f293adc5725c29a70794748948fc0cb83ab9a348b14e82f6350f818769-merged.mount: Deactivated successfully.
Dec  1 04:48:25 np0005540825 podman[82919]: 2025-12-01 09:48:25.748726069 +0000 UTC m=+0.224141370 container remove bfeb0941e9304d0bf22392cffecbdf73391e7b33d0a87ed61c8234d74cdf4583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  1 04:48:25 np0005540825 systemd[1]: libpod-conmon-bfeb0941e9304d0bf22392cffecbdf73391e7b33d0a87ed61c8234d74cdf4583.scope: Deactivated successfully.
Dec  1 04:48:25 np0005540825 ceph-osd[82809]: bdev(0x55ea02317800 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:25 np0005540825 podman[82967]: 2025-12-01 09:48:25.936637126 +0000 UTC m=+0.053146410 container create f45fca72cb8e1ae28734d314a1f838195746214c094949e1c4d877bb15023b52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:48:25 np0005540825 systemd[1]: Started libpod-conmon-f45fca72cb8e1ae28734d314a1f838195746214c094949e1c4d877bb15023b52.scope.
Dec  1 04:48:26 np0005540825 podman[82967]: 2025-12-01 09:48:25.907125337 +0000 UTC m=+0.023634681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:26 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:26 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30cb38fe1ef157da62fa7862bf9145a661ee8765caf38344190a285f9be766d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:26 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30cb38fe1ef157da62fa7862bf9145a661ee8765caf38344190a285f9be766d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:26 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30cb38fe1ef157da62fa7862bf9145a661ee8765caf38344190a285f9be766d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:26 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30cb38fe1ef157da62fa7862bf9145a661ee8765caf38344190a285f9be766d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:26 np0005540825 podman[82967]: 2025-12-01 09:48:26.039064789 +0000 UTC m=+0.155574093 container init f45fca72cb8e1ae28734d314a1f838195746214c094949e1c4d877bb15023b52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_booth, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:26 np0005540825 podman[82967]: 2025-12-01 09:48:26.046092169 +0000 UTC m=+0.162601463 container start f45fca72cb8e1ae28734d314a1f838195746214c094949e1c4d877bb15023b52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_booth, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  1 04:48:26 np0005540825 podman[82967]: 2025-12-01 09:48:26.050204491 +0000 UTC m=+0.166713765 container attach f45fca72cb8e1ae28734d314a1f838195746214c094949e1c4d877bb15023b52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_booth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: load: jerasure load: lrc 
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:26 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:26 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec  1 04:48:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3734025374,v1:192.168.122.101:6801/3734025374]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:26 np0005540825 lvm[83076]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:48:26 np0005540825 lvm[83076]: VG ceph_vg0 finished
Dec  1 04:48:26 np0005540825 nifty_booth[82983]: {}
Dec  1 04:48:26 np0005540825 systemd[1]: libpod-f45fca72cb8e1ae28734d314a1f838195746214c094949e1c4d877bb15023b52.scope: Deactivated successfully.
Dec  1 04:48:26 np0005540825 systemd[1]: libpod-f45fca72cb8e1ae28734d314a1f838195746214c094949e1c4d877bb15023b52.scope: Consumed 1.326s CPU time.
Dec  1 04:48:26 np0005540825 podman[82967]: 2025-12-01 09:48:26.894855269 +0000 UTC m=+1.011364533 container died f45fca72cb8e1ae28734d314a1f838195746214c094949e1c4d877bb15023b52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_booth, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 04:48:26 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b30cb38fe1ef157da62fa7862bf9145a661ee8765caf38344190a285f9be766d-merged.mount: Deactivated successfully.
Dec  1 04:48:26 np0005540825 podman[82967]: 2025-12-01 09:48:26.942494159 +0000 UTC m=+1.059003433 container remove f45fca72cb8e1ae28734d314a1f838195746214c094949e1c4d877bb15023b52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_booth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:48:26 np0005540825 systemd[1]: libpod-conmon-f45fca72cb8e1ae28734d314a1f838195746214c094949e1c4d877bb15023b52.scope: Deactivated successfully.
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:26 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:48:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:48:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bcc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bd000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bd000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bd000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bd000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount shared_bdev_used = 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: RocksDB version: 7.9.2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Git sha 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: DB SUMMARY
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: DB Session ID:  RKHGSQOEN6GVZSIWC8BA
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: CURRENT file:  CURRENT
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: IDENTITY file:  IDENTITY
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                         Options.error_if_exists: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.create_if_missing: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                         Options.paranoid_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                                     Options.env: 0x55ea0318ddc0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                                Options.info_log: 0x55ea031917a0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_file_opening_threads: 16
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                              Options.statistics: (nil)
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.use_fsync: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.max_log_file_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                         Options.allow_fallocate: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.use_direct_reads: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.create_missing_column_families: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                              Options.db_log_dir: 
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                                 Options.wal_dir: db.wal
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.advise_random_on_open: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.write_buffer_manager: 0x55ea03288a00
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                            Options.rate_limiter: (nil)
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.unordered_write: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.row_cache: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                              Options.wal_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.allow_ingest_behind: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.two_write_queues: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.manual_wal_flush: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.wal_compression: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.atomic_flush: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.log_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.allow_data_in_errors: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.db_host_id: __hostname__
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.max_background_jobs: 4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.max_background_compactions: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.max_subcompactions: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.max_open_files: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.bytes_per_sync: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.max_background_flushes: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Compression algorithms supported:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: 	kZSTD supported: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: 	kXpressCompression supported: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: 	kBZip2Compression supported: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: 	kLZ4Compression supported: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: 	kZlibCompression supported: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: 	kSnappyCompression supported: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: DMutex implementation: pthread_mutex_t
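
The capability block above explains the per-column-family compression settings that follow: this librocksdb build carries no ZSTD, BZip2, or Xpress support, so the Options.compression: LZ4 choice is drawn from the codecs actually compiled in. The same list can be queried programmatically; a small sketch using rocksdb::GetSupportedCompressions() from the public convenience header:

    // compressions.cc - print the compression codecs this librocksdb build
    // supports, mirroring the "Compression algorithms supported" block.
    // Build: g++ compressions.cc -lrocksdb
    #include <rocksdb/convenience.h>
    #include <iostream>

    int main() {
        // Enum values: 0x1 Snappy, 0x2 Zlib, 0x3 BZip2, 0x4 LZ4, 0x5 LZ4HC,
        // 0x6 Xpress, 0x7 ZSTD (kNoCompression = 0x0 is always available).
        for (rocksdb::CompressionType t : rocksdb::GetSupportedCompressions())
            std::cout << "0x" << std::hex << static_cast<int>(t) << '\n';
        return 0;
    }
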
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
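
The read-only open is deliberate: BlueStore first opens the kv store read-only to read metadata before reopening it read-write, and recovery here just replays MANIFEST-000032 without taking write locks or touching the WAL. A minimal sketch of the same entry point against an ordinary on-disk db, with column-family handling omitted for brevity (this db actually carries default plus the m-* and p-* families listed below):

    // ro_open.cc - open a RocksDB database read-only, the code path logged
    // from db/db_impl/db_impl_readonly.cc. Build: g++ ro_open.cc -lrocksdb
    #include <rocksdb/db.h>
    #include <iostream>

    int main() {
        rocksdb::Options options;
        options.create_if_missing = false;  // matches Options.create_if_missing: 0
        rocksdb::DB* db = nullptr;
        rocksdb::Status s = rocksdb::DB::OpenForReadOnly(options, "./db", &db);
        if (!s.ok()) { std::cerr << s.ToString() << '\n'; return 1; }
        std::cout << "opened read-only\n";
        delete db;
        return 0;
    }
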
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
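
Two values in the [default] family are worth decoding. The merge operator .T:int64_array.b:bitwise_xor is Ceph's own: it lets the OSD fold counter updates into merge operands instead of read-modify-write cycles, which is safe because xor (like per-element int64 addition) is associative and commutative, so RocksDB may combine queued operands in any grouping at flush or compaction time. The BinnedLRUCache capacity of 483183820 bytes is exactly 0.45 x 1073741824, consistent with 45% of a default 1 GiB hdd cache being granted to the KV layer (an inference from the arithmetic, not a value stated in this log). A tiny demonstration of the regrouping property an xor merge relies on:

    // xor_merge.cc - why xor merge operands can be folded in any grouping.
    #include <cstdint>
    #include <cstdio>

    int main() {
        uint64_t base  = 0x00ff;                    // stored value
        uint64_t ops[] = {0x0f0f, 0x3300, 0x0001};  // queued merge operands
        uint64_t v = base;
        for (uint64_t op : ops) v ^= op;            // applied in arrival order
        uint64_t w = base ^ (ops[0] ^ (ops[1] ^ ops[2]));  // any regrouping
        printf("%#llx %#llx equal=%d\n", (unsigned long long)v,
               (unsigned long long)w, (int)(v == w));
        return 0;
    }
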
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
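
The m-0/m-1/m-2 families above (and p-0 below) come from BlueStore's RocksDB column-family sharding; each shard repeats the [default] options verbatim except that Options.merge_operator is None, since only the default family stores the xor-merged counters. A sketch for enumerating a db's families with the public API (again only meaningful against an exported copy, as the live db sits inside BlueFS):

    // list_cfs.cc - enumerate RocksDB column families; on a copy of this db
    // it would print default, m-0, m-1, m-2, p-0, ...
    // Build: g++ list_cfs.cc -lrocksdb
    #include <rocksdb/db.h>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> cfs;
        rocksdb::Status s =
            rocksdb::DB::ListColumnFamilies(rocksdb::DBOptions(), "./db", &cfs);
        if (!s.ok()) { std::cerr << s.ToString() << '\n'; return 1; }
        for (const std::string& name : cfs) std::cout << name << '\n';
        return 0;
    }
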
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
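[Editor's illustration] The block ending here is one complete per-column-family dump (shard p-0; p-1, p-2 and the O-* shards below repeat it, differing only in the shared block cache). A minimal sketch, assuming stock RocksDB, of how the same settings would be expressed through rocksdb::ColumnFamilyOptions follows. This is an approximation, not Ceph's code path: Ceph assembles these options internally (e.g. from bluestore_rocksdb_options), MakeShardOptions is a hypothetical helper name, the bloom bits-per-key of 10 is an assumption (the log only reports "bloomfilter"), and Ceph's BinnedLRUCache is stood in for by RocksDB's standard sharded LRU cache.

    // Sketch only: rebuild the per-shard options logged above with plain RocksDB.
    #include <memory>
    #include <utility>
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::ColumnFamilyOptions MakeShardOptions(              // hypothetical helper
        std::shared_ptr<rocksdb::Cache> block_cache) {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;            // Options.write_buffer_size: 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;  // Options.compression: LZ4
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64 << 20;        // 67108864
      cf.max_bytes_for_level_base = 1ULL << 30;   // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                           // 30 days
      cf.force_consistency_checks = true;

      rocksdb::BlockBasedTableOptions table;      // the BlockBasedTable dump above
      table.block_size = 4096;
      table.cache_index_and_filter_blocks = true;
      table.pin_top_level_index_and_filter = true;
      table.format_version = 5;
      table.block_cache = std::move(block_cache);
      table.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table));
      return cf;
    }

A cache matching the logged p-* capacity could be built once with rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4) and passed to all three p shards, mirroring the identical block_cache: 0x55ea023ad350 pointer repeated in each dump.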
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
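[Editor's illustration] With Options.level_compaction_dynamic_level_bytes: 0 and every max_bytes_for_level_multiplier_addtl[] entry at 1, the target capacity of level n (n >= 1) is simply max_bytes_for_level_base x 8^(n-1): 1 GiB, 8 GiB, 64 GiB, 512 GiB, 4 TiB and 32 TiB for L1 through L6. A small sketch of that arithmetic, using the values logged above:

    // Illustrative only: target level capacities implied by the dumped options
    // (max_bytes_for_level_base = 1073741824, max_bytes_for_level_multiplier = 8,
    //  level_compaction_dynamic_level_bytes = 0, addtl[] all 1, num_levels = 7).
    #include <cstdio>

    int main() {
      double cap = 1073741824.0;                  // L1 target = max_bytes_for_level_base
      for (int level = 1; level <= 6; ++level) {  // num_levels = 7 -> L1..L6
        std::printf("L%d target: %.0f bytes (%.0f GiB)\n",
                    level, cap, cap / (1 << 30));
        cap *= 8.0;                               // addtl[level] == 1: plain geometric growth
      }
      return 0;
    }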
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ac9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ac9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ac9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
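
The column families recovered above (default plus eleven shards, matching max_column_family 11 in the manifest) reflect BlueStore's RocksDB sharding: in this release the m-* and p-* families hold per-pool and per-PG omap data, the O-* families hold object metadata (onodes), L holds deferred writes and P pgmeta omap. A minimal offline sketch (C++) that lists the same families from a copy of a stopped OSD's DB directory; the path below is hypothetical, not from this log:

    // List column families without opening the DB (reads the MANIFEST only).
    #include <rocksdb/db.h>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
      rocksdb::DBOptions opts;
      std::vector<std::string> cfs;
      rocksdb::Status s = rocksdb::DB::ListColumnFamilies(
          opts, "/tmp/osd-1-db-copy", &cfs);
      if (!s.ok()) {
        std::cerr << s.ToString() << std::endl;
        return 1;
      }
      for (const auto& cf : cfs)
        std::cout << cf << std::endl;  // expect: default, m-0..m-2, p-0..p-2, O-0..O-2, L, P
      return 0;
    }
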
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 07b16392-40ac-411f-811b-ecfc23df38e3
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582507253744, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582507253946, "job": 1, "event": "recovery_finished"}
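
"Recovering log #31 mode 2" is the WAL replay with wal_recovery_mode 2 (also printed under Options above), i.e. WALRecoveryMode::kPointInTimeRecovery: records are replayed up to the last consistent point and anything after a corruption is discarded. A sketch of the equivalent open call; the helper is hypothetical, not Ceph's code:

    #include <rocksdb/db.h>
    #include <rocksdb/options.h>
    #include <string>

    // Open a DB the way "mode 2" describes: point-in-time WAL recovery.
    rocksdb::Status open_with_pit_recovery(const std::string& path,
                                           rocksdb::DB** db) {
      rocksdb::Options opts;
      opts.create_if_missing = false;
      opts.wal_recovery_mode =
          rocksdb::WALRecoveryMode::kPointInTimeRecovery;  // enum value 2
      return rocksdb::DB::Open(opts, path, db);
    }
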
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
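
The option string logged by _open_db is Ceph's bluestore_rocksdb_options applied on top of RocksDB defaults. RocksDB's convenience API parses the same key=value form, though with ';' as the separator where Ceph's config uses ','; a sketch with a few of the values above (abbreviated, not the full string):

    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>
    #include <cassert>

    int main() {
      rocksdb::Options base, out;
      // Same settings as the log line, in RocksDB's own option-string syntax.
      rocksdb::Status s = rocksdb::GetOptionsFromString(
          base,
          "compression=kLZ4Compression;write_buffer_size=16777216;"
          "max_write_buffer_number=64;level0_file_num_compaction_trigger=8",
          &out);
      assert(s.ok());
      assert(out.write_buffer_size == 16777216);
      return 0;
    }
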
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: freelist init
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: freelist _read_cfg
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
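
The _init_alloc numbers are internally consistent: capacity 0x4ffc00000 is 21,470,642,176 bytes (~20 GiB), and free 0x4ffbfd000 leaves just 0x3000 (12 KiB) allocated across the 2 extents, matching the near-zero fragmentation of 1.9e-07. A quick check of the arithmetic:

    #include <cstdint>
    #include <cstdio>

    int main() {
      const uint64_t capacity = 0x4ffc00000ULL;  // from the log line above
      const uint64_t free_b   = 0x4ffbfd000ULL;
      std::printf("capacity:  %llu bytes (%.2f GiB)\n",
                  (unsigned long long)capacity,
                  capacity / (1024.0 * 1024.0 * 1024.0));
      std::printf("allocated: %llu bytes\n",            // 0x3000 = 12288
                  (unsigned long long)(capacity - free_b));
      return 0;
    }
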
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs umount
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bd000 /var/lib/ceph/osd/ceph-1/block) close
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bd000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bd000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bd000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bdev(0x55ea031bd000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
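
Both bdev messages above are benign on a virtio disk: the F_SET_FILE_RW_HINT fcntl is simply not supported by this device (EINVAL), and the 512-byte st_blksize the kernel reports for the block file is overridden by BlueStore's own 4 KiB block size. A simplified sketch of the probe behind the second message; the real logic lives in Ceph's KernelDevice and differs in detail:

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
      int fd = open("/var/lib/ceph/osd/ceph-1/block", O_RDONLY);
      if (fd < 0) return 1;
      struct stat st;
      if (fstat(fd, &st) == 0 && st.st_blksize != 4096)
        std::printf("st_blksize %ld, using bdev_block_size 4096 anyway\n",
                    (long)st.st_blksize);
      close(fd);
      return 0;
    }
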
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluefs mount shared_bdev_used = 4718592
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
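
The db_paths pair maps to RocksDB's Options.db_paths: two locations with the same target size (20397110067 bytes, about 95% of the 20 GiB device), so SST files would spill from "db" to "db.slow" only after the first target fills; on this single-device OSD both directories live on the same shared block device anyway. A sketch of the equivalent settings, values copied from the log:

    #include <rocksdb/options.h>

    void set_bluestore_like_paths(rocksdb::Options& opts) {
      opts.db_paths = {
          {"db",      20397110067ULL},
          {"db.slow", 20397110067ULL},
      };
      opts.wal_dir = "db.wal";  // matches Options.wal_dir in the DB SUMMARY below
    }
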
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: RocksDB version: 7.9.2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Git sha 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: DB SUMMARY
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: DB Session ID:  RKHGSQOEN6GVZSIWC8BB
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: CURRENT file:  CURRENT
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: IDENTITY file:  IDENTITY
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                         Options.error_if_exists: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.create_if_missing: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                         Options.paranoid_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                                     Options.env: 0x55ea0332c2a0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                                Options.info_log: 0x55ea03191920
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_file_opening_threads: 16
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                              Options.statistics: (nil)
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.use_fsync: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.max_log_file_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                         Options.allow_fallocate: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.use_direct_reads: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.create_missing_column_families: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                              Options.db_log_dir: 
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                                 Options.wal_dir: db.wal
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.advise_random_on_open: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.write_buffer_manager: 0x55ea03288a00
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                            Options.rate_limiter: (nil)
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.unordered_write: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.row_cache: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                              Options.wal_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.allow_ingest_behind: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.two_write_queues: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.manual_wal_flush: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.wal_compression: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.atomic_flush: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.log_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.allow_data_in_errors: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.db_host_id: __hostname__
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.max_background_jobs: 4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.max_background_compactions: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.max_subcompactions: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.max_open_files: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.bytes_per_sync: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.max_background_flushes: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Compression algorithms supported:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     kZSTD supported: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     kXpressCompression supported: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     kBZip2Compression supported: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     kZSTDNotFinalCompression supported: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     kLZ4Compression supported: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     kZlibCompression supported: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     kLZ4HCCompression supported: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     kSnappyCompression supported: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: DMutex implementation: pthread_mutex_t
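
Only LZ4, Zlib, LZ4HC and Snappy are compiled into this build (no ZSTD or BZip2), consistent with Options.compression being LZ4 in every column-family dump. The same list can be queried at runtime through the convenience API; a small sketch:

    #include <rocksdb/convenience.h>
    #include <iostream>

    int main() {
      // Returns only the CompressionTypes this binary was built with.
      for (rocksdb::CompressionType t : rocksdb::GetSupportedCompressions())
        std::cout << "supported compression type id: "
                  << static_cast<int>(t) << std::endl;
      return 0;
    }
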
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ad350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191ac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ea023ac9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
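Annotation: a sanity check on the memtable settings just dumped, assuming standard RocksDB semantics (each memtable holds write_buffer_size bytes, min_write_buffer_number_to_merge memtables are combined per flush, and max_write_buffer_number caps how many may exist at once):

write_buffer_size = 16777216       # 16 MiB per memtable, from the dump
min_merge = 6                      # memtables combined per flush
max_buffers = 64                   # cap on live memtables
print(f"typical L0 file: ~{write_buffer_size * min_merge / 2**20:.0f} MiB")                    # ~96 MiB
print(f"worst-case memtable RAM: ~{write_buffer_size * max_buffers / 2**30:.0f} GiB per CF")   # ~1 GiB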
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
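Annotation: the CompactOnDeletionCollector entry above is a table-properties collector that flags an SST file for early compaction when tombstones cluster: with a sliding window of 32768 entries and a trigger of 16384, any window that is at least half deletions marks the file (the ratio check is disabled at 0). A minimal sketch of that rule, assuming the documented sliding-window semantics:

from collections import deque

def marked_for_compaction(entries, window_size=32768, deletion_trigger=16384):
    # entries: iterable of booleans, True where the table entry is a tombstone.
    # Flag the file if any window of `window_size` consecutive entries
    # contains at least `deletion_trigger` deletions.
    window = deque()
    deletions = 0
    for is_delete in entries:
        window.append(is_delete)
        deletions += is_delete
        if len(window) > window_size:
            deletions -= window.popleft()
        if deletions >= deletion_trigger:
            return True
    return False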
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
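Annotation: taken together, the dumped compaction options pin down the shape of this column family's LSM tree. With max_bytes_for_level_base 1 GiB, multiplier 8, static level sizing (level_compaction_dynamic_level_bytes 0), and 64 MiB target files, the per-level capacities work out as:

base = 1 << 30                # max_bytes_for_level_base: 1 GiB
mult = 8                      # max_bytes_for_level_multiplier
target_file = 64 * 2**20      # target_file_size_base: 64 MiB, multiplier 1
for level in range(1, 7):     # num_levels is 7, so L1..L6
    cap = base * mult ** (level - 1)
    print(f"L{level}: {cap / 2**30:>6.0f} GiB  (~{cap // target_file} files)")
# L1: 1 GiB (~16 files), L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB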
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ea023ac9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:           Options.merge_operator: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.compaction_filter_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.sst_partitioner_factory: None
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ea03191ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ea023ac9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.write_buffer_size: 16777216
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.max_write_buffer_number: 64
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.compression: LZ4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.num_levels: 7
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.level: 32767
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.compression_opts.strategy: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                  Options.compression_opts.enabled: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.arena_block_size: 1048576
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.disable_auto_compactions: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.inplace_update_support: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.bloom_locality: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                    Options.max_successive_merges: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.paranoid_file_checks: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.force_consistency_checks: 1
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.report_bg_io_stats: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                               Options.ttl: 2592000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                       Options.enable_blob_files: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                           Options.min_blob_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                          Options.blob_file_size: 268435456
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb:                Options.blob_file_starting_level: 0
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
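Annotation: the twelve recovery lines above enumerate the column families BlueStore shards its RocksDB keyspace into (the m-*, p-*, and O-* groups are sharded key prefixes; default, L, and P round out the set), all pointing at log number 5, i.e. one shared WAL. They parse mechanically; a small regex sketch for correlating CF names and IDs with later events:

import re

CF_LINE = re.compile(r"Column family \[(?P<name>[^\]]+)\] \(ID (?P<cfid>\d+)\), log number is (?P<log>\d+)")

sample = "rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5"
m = CF_LINE.search(sample)
print(m.group("name"), int(m.group("cfid")), int(m.group("log")))   # O-2 9 5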
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 07b16392-40ac-411f-811b-ecfc23df38e3
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582507520953, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582507524224, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582507, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07b16392-40ac-411f-811b-ecfc23df38e3", "db_session_id": "RKHGSQOEN6GVZSIWC8BB", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582507526883, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582507, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07b16392-40ac-411f-811b-ecfc23df38e3", "db_session_id": "RKHGSQOEN6GVZSIWC8BB", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582507529503, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582507, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07b16392-40ac-411f-811b-ecfc23df38e3", "db_session_id": "RKHGSQOEN6GVZSIWC8BB", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582507530986, "job": 1, "event": "recovery_finished"}
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ea0338e000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: DB pointer 0x55ea03338000
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
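Annotation: the _open_db line echoes the effective RocksDB options string BlueStore applied, as a flat comma-separated key=value list. It splits mechanically; a sketch (values such as compaction_readahead_size=2MB keep their unit suffix and need separate normalization):

def parse_opts(s):
    # bluestore logs its RocksDB options at _open_db as "k=v,k=v,..."
    return dict(kv.split("=", 1) for kv in s.split(","))

opts = parse_opts("compression=kLZ4Compression,max_write_buffer_number=64,"
                  "compaction_readahead_size=2MB,max_total_wal_size=1073741824")
print(opts["max_write_buffer_number"], opts["compaction_readahead_size"])   # 64 2MB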
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 460.80 MB usag
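Annotation: the stats dumps above are single syslog records whose embedded control characters were escaped octally by the collector (#012 is newline, #011 is tab), which is why they read as walls of text; undoing the escaping recovers the original tables (the record above is also truncated by the collector, ending mid-word):

def unescape_syslog(payload):
    # rsyslog's default control-character escaping encodes bytes as #NNN
    # octal: 0o12 is "\n" and 0o11 is "\t".
    return payload.replace("#012", "\n").replace("#011", "\t")

print(unescape_syslog("** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval"))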
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: _get_class not permitted to load lua
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: _get_class not permitted to load sdk
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: osd.1 0 load_pgs
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: osd.1 0 load_pgs opened 0 pgs
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:48:27 np0005540825 ceph-osd[82809]: osd.1 0 log_to_monitors true
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: from='osd.0 [v2:192.168.122.101:6800/3734025374,v1:192.168.122.101:6801/3734025374]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1[82805]: 2025-12-01T09:48:27.567+0000 7f49eea2e740 -1 osd.1 0 log_to_monitors true
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1037904125,v1:192.168.122.100:6803/1037904125]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3734025374,v1:192.168.122.101:6801/3734025374]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3734025374,v1:192.168.122.101:6801/3734025374]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
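Annotation: the initial_weight of 0.0195 follows the CRUSH convention of weighting an item by its capacity in TiB, which identifies the device size; assuming that convention:

def crush_weight(size_bytes, digits=4):
    # CRUSH item weights default to device capacity in TiB.
    return round(size_bytes / 2**40, digits)

print(crush_weight(20 * 2**30))   # 0.0195, consistent with a 20 GiB OSD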
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:48:27 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  1 04:48:27 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:27 np0005540825 podman[83642]: 2025-12-01 09:48:27.947052076 +0000 UTC m=+0.073201703 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 04:48:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
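Annotation: _set_new_cache_sizes appears to report the monitor autotuning its cache split between incremental osdmaps, full osdmaps, and the kv backend; the dumped figures come to roughly a third each of the ~0.95 GiB total:

cache_size = 1020054731
inc_alloc  = 348127232
full_alloc = 348127232
kv_alloc   = 322961408
for name, n in [("inc", inc_alloc), ("full", full_alloc), ("kv", kv_alloc)]:
    print(f"{name}: {n / cache_size:.1%}")   # inc 34.1%, full 34.1%, kv 31.7%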
Dec  1 04:48:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:28 np0005540825 podman[83642]: 2025-12-01 09:48:28.077831917 +0000 UTC m=+0.203981494 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:28 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  1 04:48:28 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: from='osd.1 [v2:192.168.122.100:6802/1037904125,v1:192.168.122.100:6803/1037904125]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: from='osd.0 [v2:192.168.122.101:6800/3734025374,v1:192.168.122.101:6801/3734025374]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: from='osd.0 [v2:192.168.122.101:6800/3734025374,v1:192.168.122.101:6801/3734025374]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1037904125,v1:192.168.122.100:6803/1037904125]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3734025374,v1:192.168.122.101:6801/3734025374]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec  1 04:48:28 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1037904125,v1:192.168.122.100:6803/1037904125]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
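Annotation: the initial_weight logged here is the device's capacity expressed in TiB. With the 40 GiB of total raw capacity this cluster reports later in the log across two OSDs, each 20 GiB device works out to the 0.0195 shown. A quick check:

    # CRUSH weights are capacity in TiB; two 20 GiB OSDs -> 40 GiB total.
    print(round(20 / 1024, 4))   # 0.0195, the initial_weight in the log line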
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:48:28 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  1 04:48:28 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3734025374; not ready for session (expect reconnect)
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:48:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:48:28 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
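Annotation: the repeated "failed to return metadata ... (2) No such file or directory" messages are the mgr asking the mon for OSD metadata before either OSD has finished booting; they stop once osd.0 and osd.1 boot at osdmap e9 further down. A minimal polling sketch, assuming the ceph CLI and admin keyring are available on the host (the nonzero-exit-on-ENOENT behavior is an assumption):

    import json, subprocess, time

    def osd_metadata(osd_id):
        # `ceph osd metadata <id>` is answered with ENOENT until the OSD has
        # registered; assume the CLI exits nonzero in that case.
        r = subprocess.run(
            ["ceph", "osd", "metadata", str(osd_id), "--format", "json"],
            capture_output=True, text=True)
        return json.loads(r.stdout) if r.returncode == 0 else None

    for _ in range(30):                  # the window in this log is ~6 seconds
        if osd_metadata(1) is not None:
            break
        time.sleep(1)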
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1037904125,v1:192.168.122.100:6803/1037904125]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Dec  1 04:48:29 np0005540825 ceph-osd[82809]: osd.1 0 done with init, starting boot process
Dec  1 04:48:29 np0005540825 ceph-osd[82809]: osd.1 0 start_boot
Dec  1 04:48:29 np0005540825 ceph-osd[82809]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  1 04:48:29 np0005540825 ceph-osd[82809]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  1 04:48:29 np0005540825 ceph-osd[82809]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  1 04:48:29 np0005540825 ceph-osd[82809]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  1 04:48:29 np0005540825 ceph-osd[82809]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Dec  1 04:48:29 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3734025374; not ready for session (expect reconnect)
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:48:29 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  1 04:48:29 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: from='osd.1 [v2:192.168.122.100:6802/1037904125,v1:192.168.122.100:6803/1037904125]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: from='osd.0 [v2:192.168.122.101:6800/3734025374,v1:192.168.122.101:6801/3734025374]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: from='osd.1 [v2:192.168.122.100:6802/1037904125,v1:192.168.122.100:6803/1037904125]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:29 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1037904125; not ready for session (expect reconnect)
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:48:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:48:29 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  1 04:48:29 np0005540825 podman[83894]: 2025-12-01 09:48:29.829567853 +0000 UTC m=+0.072335360 container create 6866639e60f284cc3751880c2c722ff53231e3cfcb42ca1afabe4e765748d9e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ptolemy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  1 04:48:29 np0005540825 podman[83894]: 2025-12-01 09:48:29.794810692 +0000 UTC m=+0.037578289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:29 np0005540825 systemd[1]: Started libpod-conmon-6866639e60f284cc3751880c2c722ff53231e3cfcb42ca1afabe4e765748d9e1.scope.
Dec  1 04:48:29 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:29 np0005540825 podman[83894]: 2025-12-01 09:48:29.96502122 +0000 UTC m=+0.207788727 container init 6866639e60f284cc3751880c2c722ff53231e3cfcb42ca1afabe4e765748d9e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ptolemy, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:29 np0005540825 podman[83894]: 2025-12-01 09:48:29.977807286 +0000 UTC m=+0.220574783 container start 6866639e60f284cc3751880c2c722ff53231e3cfcb42ca1afabe4e765748d9e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 04:48:29 np0005540825 loving_ptolemy[83910]: 167 167
Dec  1 04:48:29 np0005540825 systemd[1]: libpod-6866639e60f284cc3751880c2c722ff53231e3cfcb42ca1afabe4e765748d9e1.scope: Deactivated successfully.
Dec  1 04:48:29 np0005540825 podman[83894]: 2025-12-01 09:48:29.993069659 +0000 UTC m=+0.235837156 container attach 6866639e60f284cc3751880c2c722ff53231e3cfcb42ca1afabe4e765748d9e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ptolemy, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:48:29 np0005540825 podman[83894]: 2025-12-01 09:48:29.994834097 +0000 UTC m=+0.237601644 container died 6866639e60f284cc3751880c2c722ff53231e3cfcb42ca1afabe4e765748d9e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ptolemy, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:48:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:30 np0005540825 systemd[1]: var-lib-containers-storage-overlay-33b288f7705330aedcba4e0ce1866c9e9dc5a17a603bca80959a8752b656508e-merged.mount: Deactivated successfully.
Dec  1 04:48:30 np0005540825 podman[83894]: 2025-12-01 09:48:30.074055592 +0000 UTC m=+0.316823129 container remove 6866639e60f284cc3751880c2c722ff53231e3cfcb42ca1afabe4e765748d9e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 04:48:30 np0005540825 systemd[1]: libpod-conmon-6866639e60f284cc3751880c2c722ff53231e3cfcb42ca1afabe4e765748d9e1.scope: Deactivated successfully.
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:48:30 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5248M
Dec  1 04:48:30 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5248M
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:30 np0005540825 podman[83934]: 2025-12-01 09:48:30.324749369 +0000 UTC m=+0.080754797 container create 9823303bf02fb1919ef44c19b6db9e86f02256c7fa1027c2fe79c544145044fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:48:30 np0005540825 systemd[1]: Started libpod-conmon-9823303bf02fb1919ef44c19b6db9e86f02256c7fa1027c2fe79c544145044fb.scope.
Dec  1 04:48:30 np0005540825 podman[83934]: 2025-12-01 09:48:30.280980244 +0000 UTC m=+0.036985762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:48:30 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e45d1744d8242fe587b07cca3abdd8ed627a2891c5ff3d50bd1c5c8baaa954/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e45d1744d8242fe587b07cca3abdd8ed627a2891c5ff3d50bd1c5c8baaa954/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e45d1744d8242fe587b07cca3abdd8ed627a2891c5ff3d50bd1c5c8baaa954/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e45d1744d8242fe587b07cca3abdd8ed627a2891c5ff3d50bd1c5c8baaa954/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:30 np0005540825 podman[83934]: 2025-12-01 09:48:30.445811717 +0000 UTC m=+0.201817175 container init 9823303bf02fb1919ef44c19b6db9e86f02256c7fa1027c2fe79c544145044fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Dec  1 04:48:30 np0005540825 podman[83934]: 2025-12-01 09:48:30.456146527 +0000 UTC m=+0.212151965 container start 9823303bf02fb1919ef44c19b6db9e86f02256c7fa1027c2fe79c544145044fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_panini, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 04:48:30 np0005540825 podman[83934]: 2025-12-01 09:48:30.464542394 +0000 UTC m=+0.220547862 container attach 9823303bf02fb1919ef44c19b6db9e86f02256c7fa1027c2fe79c544145044fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_panini, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 04:48:30 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3734025374; not ready for session (expect reconnect)
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:48:30 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  1 04:48:30 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1037904125; not ready for session (expect reconnect)
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:48:30 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: from='osd.1 [v2:192.168.122.100:6802/1037904125,v1:192.168.122.100:6803/1037904125]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: Adjusting osd_memory_target on compute-1 to  5248M
Dec  1 04:48:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:31 np0005540825 amazing_panini[83950]: [
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:    {
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:        "available": false,
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:        "being_replaced": false,
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:        "ceph_device_lvm": false,
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:        "lsm_data": {},
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:        "lvs": [],
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:        "path": "/dev/sr0",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:        "rejected_reasons": [
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "Insufficient space (<5GB)",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "Has a FileSystem"
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:        ],
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:        "sys_api": {
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "actuators": null,
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "device_nodes": [
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:                "sr0"
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            ],
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "devname": "sr0",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "human_readable_size": "482.00 KB",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "id_bus": "ata",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "model": "QEMU DVD-ROM",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "nr_requests": "2",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "parent": "/dev/sr0",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "partitions": {},
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "path": "/dev/sr0",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "removable": "1",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "rev": "2.5+",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "ro": "0",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "rotational": "1",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "sas_address": "",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "sas_device_handle": "",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "scheduler_mode": "mq-deadline",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "sectors": 0,
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "sectorsize": "2048",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "size": 493568.0,
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "support_discard": "2048",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "type": "disk",
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:            "vendor": "QEMU"
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:        }
Dec  1 04:48:31 np0005540825 amazing_panini[83950]:    }
Dec  1 04:48:31 np0005540825 amazing_panini[83950]: ]
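Annotation: the JSON above is a ceph-volume-style inventory report (the exact command behind the auto-named "amazing_panini" container is not shown in this log); the only device found, /dev/sr0, is rejected for having under 5 GB and an existing filesystem, so no OSD can be created from it on this host. A sketch of filtering such a report, assuming the same schema:

    import json

    def usable(inventory_text):
        # Keep devices cephadm could consume; report why the rest were rejected.
        devs = json.loads(inventory_text)
        for dev in devs:
            if not dev["available"]:
                print(dev["path"], "rejected:", ", ".join(dev["rejected_reasons"]))
        return [d["path"] for d in devs if d["available"]]

    # For the report above this prints:
    #   /dev/sr0 rejected: Insufficient space (<5GB), Has a FileSystem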
Dec  1 04:48:31 np0005540825 systemd[1]: libpod-9823303bf02fb1919ef44c19b6db9e86f02256c7fa1027c2fe79c544145044fb.scope: Deactivated successfully.
Dec  1 04:48:31 np0005540825 podman[83934]: 2025-12-01 09:48:31.304020242 +0000 UTC m=+1.060025680 container died 9823303bf02fb1919ef44c19b6db9e86f02256c7fa1027c2fe79c544145044fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:48:31 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a0e45d1744d8242fe587b07cca3abdd8ed627a2891c5ff3d50bd1c5c8baaa954-merged.mount: Deactivated successfully.
Dec  1 04:48:31 np0005540825 podman[83934]: 2025-12-01 09:48:31.394908353 +0000 UTC m=+1.150913781 container remove 9823303bf02fb1919ef44c19b6db9e86f02256c7fa1027c2fe79c544145044fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  1 04:48:31 np0005540825 systemd[1]: libpod-conmon-9823303bf02fb1919ef44c19b6db9e86f02256c7fa1027c2fe79c544145044fb.scope: Deactivated successfully.
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  1 04:48:31 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec  1 04:48:31 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  1 04:48:31 np0005540825 ceph-mgr[74709]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Dec  1 04:48:31 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
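Annotation: cephadm's memory autotuner divides the host's budget among its OSDs; on compute-0 that came to 134220595 bytes (~128 MiB), which the mon refuses because osd_memory_target has a hard minimum of 939524096 bytes (exactly 896 MiB). The check it trips, as arithmetic:

    # osd_memory_target minimum, from the error text: 939524096 B = 896 MiB.
    MINIMUM = 939_524_096
    proposed = 134_220_595        # ~128 MiB, the autotuned value for compute-0
    assert proposed < MINIMUM     # hence: "Value '134220595' is below minimum"
    print(MINIMUM / 2**20, proposed / 2**20)   # 896.0, ~128.0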
Dec  1 04:48:31 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3734025374; not ready for session (expect reconnect)
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:48:31 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  1 04:48:31 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1037904125; not ready for session (expect reconnect)
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:48:31 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:48:31 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  1 04:48:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:32 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3734025374; not ready for session (expect reconnect)
Dec  1 04:48:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:48:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:48:32 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  1 04:48:32 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1037904125; not ready for session (expect reconnect)
Dec  1 04:48:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:48:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:48:32 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  1 04:48:32 np0005540825 ceph-mon[74416]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec  1 04:48:32 np0005540825 ceph-mon[74416]: Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Dec  1 04:48:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:48:33 np0005540825 ceph-osd[82809]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.996 iops: 6398.922 elapsed_sec: 0.469
Dec  1 04:48:33 np0005540825 ceph-osd[82809]: log_channel(cluster) log [WRN] : OSD bench result of 6398.922469 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
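Annotation: the bench numbers are internally consistent: 12288000 bytes at a 4 KiB block size is 3000 writes, and dividing by the (rounded) elapsed_sec reproduces the reported IOPS and bandwidth:

    count, bsize, elapsed = 12_288_000, 4096, 0.469   # values from the log
    ios = count // bsize                              # 3000 writes
    print(ios / elapsed)       # ~6396.6 IOPS (log: 6398.922; elapsed is rounded)
    print(count / elapsed / 2**20)   # ~24.98 MiB/s (log: 24.996)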
Dec  1 04:48:33 np0005540825 ceph-osd[82809]: osd.1 0 waiting for initial osdmap
Dec  1 04:48:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1[82805]: 2025-12-01T09:48:33.164+0000 7f49ea9b1640 -1 osd.1 0 waiting for initial osdmap
Dec  1 04:48:33 np0005540825 ceph-osd[82809]: osd.1 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec  1 04:48:33 np0005540825 ceph-osd[82809]: osd.1 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec  1 04:48:33 np0005540825 ceph-osd[82809]: osd.1 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec  1 04:48:33 np0005540825 ceph-osd[82809]: osd.1 8 check_osdmap_features require_osd_release unknown -> squid
Dec  1 04:48:33 np0005540825 ceph-osd[82809]: osd.1 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  1 04:48:33 np0005540825 ceph-osd[82809]: osd.1 8 set_numa_affinity not setting numa affinity
Dec  1 04:48:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-osd-1[82805]: 2025-12-01T09:48:33.187+0000 7f49e5fd9640 -1 osd.1 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  1 04:48:33 np0005540825 ceph-osd[82809]: osd.1 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec  1 04:48:33 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3734025374; not ready for session (expect reconnect)
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:48:33 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  1 04:48:33 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1037904125; not ready for session (expect reconnect)
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:48:33 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/3734025374,v1:192.168.122.101:6801/3734025374] boot
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/1037904125,v1:192.168.122.100:6803/1037904125] boot
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:48:33 np0005540825 ceph-osd[82809]: osd.1 9 state: booting -> active
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: OSD bench result of 6976.416651 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  1 04:48:33 np0005540825 ceph-mon[74416]: OSD bench result of 6398.922469 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
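Annotation: both OSDs hit the same mclock warning because a QEMU-backed virtual "hdd" benches at ~6400 IOPS, far outside the 50-500 IOPS sanity window for that device class, so the 315 IOPS default is kept. The override the message recommends, as a hedged sketch (the 600.0 figure is hypothetical; measure it with fio first, as the warning suggests):

    import subprocess

    measured_iops = 600.0   # hypothetical: substitute a real fio result
    subprocess.run(["ceph", "config", "set", "osd.1",
                    "osd_mclock_max_capacity_iops_hdd", str(measured_iops)],
                   check=True)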
Dec  1 04:48:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  1 04:48:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec  1 04:48:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:48:34 np0005540825 ceph-mon[74416]: osd.0 [v2:192.168.122.101:6800/3734025374,v1:192.168.122.101:6801/3734025374] boot
Dec  1 04:48:34 np0005540825 ceph-mon[74416]: osd.1 [v2:192.168.122.100:6802/1037904125,v1:192.168.122.100:6803/1037904125] boot
Dec  1 04:48:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Dec  1 04:48:34 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Dec  1 04:48:35 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] creating mgr pool
Dec  1 04:48:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec  1 04:48:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
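Annotation: the devicehealth module's ".mgr" pool creation arrives at the mon as a structured command rather than CLI text. A sketch of issuing the same payload through the librados Python bindings (python3-rados), assuming an admin keyring is configured:

    import json, rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = {"prefix": "osd pool create", "format": "json", "pool": ".mgr",
           "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32,
           "yes_i_really_mean_it": True}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)        # 0 on success; audit shows dispatch then finished
    cluster.shutdown()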
Dec  1 04:48:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec  1 04:48:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  1 04:48:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  1 04:48:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Dec  1 04:48:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Dec  1 04:48:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  1 04:48:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  1 04:48:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  1 04:48:36 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Dec  1 04:48:36 np0005540825 ceph-osd[82809]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  1 04:48:36 np0005540825 ceph-osd[82809]: osd.1 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec  1 04:48:36 np0005540825 ceph-osd[82809]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  1 04:48:36 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:48:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec  1 04:48:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  1 04:48:36 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  1 04:48:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec  1 04:48:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  1 04:48:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Dec  1 04:48:37 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec  1 04:48:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  1 04:48:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  1 04:48:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:48:37 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] creating main.db for devicehealth
Dec  1 04:48:37 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Check health
Dec  1 04:48:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  1 04:48:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  1 04:48:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  1 04:48:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  1 04:48:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:48:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec  1 04:48:38 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  1 04:48:38 np0005540825 ceph-mon[74416]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  1 04:48:38 np0005540825 ceph-mon[74416]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  1 04:48:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Dec  1 04:48:38 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Dec  1 04:48:39 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fospow(active, since 88s)
Dec  1 04:48:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:48:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:48:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:48:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:48:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:48:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:48:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:48:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:48:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:48:53 np0005540825 python3[85167]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
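Annotation: the playbook above shells out to podman and jq just to read one integer, .osdmap.num_up_osds, from `ceph status --format json`. The same extraction as a sketch, reusing the exact ceph arguments from the command line and assuming a ceph CLI is usable outside the container:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "--fsid", "365f19c2-81e5-5edd-b6b4-280555214d3a",
         "-c", "/etc/ceph/ceph.conf",
         "-k", "/etc/ceph/ceph.client.admin.keyring",
         "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out)["osdmap"]["num_up_osds"])   # 2 once both OSDs are up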
Dec  1 04:48:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:54 np0005540825 podman[85169]: 2025-12-01 09:48:54.111069413 +0000 UTC m=+0.050393705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:48:54 np0005540825 podman[85169]: 2025-12-01 09:48:54.255703769 +0000 UTC m=+0.195028021 container create bc8115a994edc39bbe991590f6d99b083d05fbddc157b3422c17a1636c9f72da (image=quay.io/ceph/ceph:v19, name=condescending_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  1 04:48:54 np0005540825 systemd[1]: Started libpod-conmon-bc8115a994edc39bbe991590f6d99b083d05fbddc157b3422c17a1636c9f72da.scope.
Dec  1 04:48:54 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:54 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a525b19b1d923dc27951111ebaee093d5cb22e376090839a4d4368b5e08c0a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:54 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a525b19b1d923dc27951111ebaee093d5cb22e376090839a4d4368b5e08c0a4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:54 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a525b19b1d923dc27951111ebaee093d5cb22e376090839a4d4368b5e08c0a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:54 np0005540825 podman[85169]: 2025-12-01 09:48:54.357026822 +0000 UTC m=+0.296351064 container init bc8115a994edc39bbe991590f6d99b083d05fbddc157b3422c17a1636c9f72da (image=quay.io/ceph/ceph:v19, name=condescending_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:54 np0005540825 podman[85169]: 2025-12-01 09:48:54.36396346 +0000 UTC m=+0.303287692 container start bc8115a994edc39bbe991590f6d99b083d05fbddc157b3422c17a1636c9f72da (image=quay.io/ceph/ceph:v19, name=condescending_kepler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:48:54 np0005540825 podman[85169]: 2025-12-01 09:48:54.404791495 +0000 UTC m=+0.344115717 container attach bc8115a994edc39bbe991590f6d99b083d05fbddc157b3422c17a1636c9f72da (image=quay.io/ceph/ceph:v19, name=condescending_kepler, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 04:48:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  1 04:48:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3045436796' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  1 04:48:54 np0005540825 condescending_kepler[85185]: 
Dec  1 04:48:54 np0005540825 condescending_kepler[85185]: {"fsid":"365f19c2-81e5-5edd-b6b4-280555214d3a","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":121,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":13,"num_osds":2,"num_up_osds":2,"osd_up_since":1764582513,"num_in_osds":2,"osd_in_since":1764582494,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":55791616,"bytes_avail":42885492736,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-12-01T09:46:50:475394+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-01T09:48:14.037975+0000","services":{}},"progress_events":{}}
Dec  1 04:48:54 np0005540825 systemd[1]: libpod-bc8115a994edc39bbe991590f6d99b083d05fbddc157b3422c17a1636c9f72da.scope: Deactivated successfully.
Dec  1 04:48:54 np0005540825 podman[85169]: 2025-12-01 09:48:54.871038038 +0000 UTC m=+0.810362250 container died bc8115a994edc39bbe991590f6d99b083d05fbddc157b3422c17a1636c9f72da (image=quay.io/ceph/ceph:v19, name=condescending_kepler, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 04:48:54 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4a525b19b1d923dc27951111ebaee093d5cb22e376090839a4d4368b5e08c0a4-merged.mount: Deactivated successfully.
Dec  1 04:48:54 np0005540825 podman[85169]: 2025-12-01 09:48:54.953445639 +0000 UTC m=+0.892769861 container remove bc8115a994edc39bbe991590f6d99b083d05fbddc157b3422c17a1636c9f72da (image=quay.io/ceph/ceph:v19, name=condescending_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:54 np0005540825 systemd[1]: libpod-conmon-bc8115a994edc39bbe991590f6d99b083d05fbddc157b3422c17a1636c9f72da.scope: Deactivated successfully.
Dec  1 04:48:55 np0005540825 python3[85249]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:48:55 np0005540825 podman[85250]: 2025-12-01 09:48:55.584225387 +0000 UTC m=+0.049310636 container create a0aead0ce8ef02d34312703150ffd887e8bfad61b258390be794d026d52eb9dd (image=quay.io/ceph/ceph:v19, name=affectionate_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:55 np0005540825 systemd[1]: Started libpod-conmon-a0aead0ce8ef02d34312703150ffd887e8bfad61b258390be794d026d52eb9dd.scope.
Dec  1 04:48:55 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92cf9f5469cf06779543ab966044f0ff6a21cda38499a932b2688e1d37d8fbfc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92cf9f5469cf06779543ab966044f0ff6a21cda38499a932b2688e1d37d8fbfc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:55 np0005540825 podman[85250]: 2025-12-01 09:48:55.565062558 +0000 UTC m=+0.030147837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:48:55 np0005540825 podman[85250]: 2025-12-01 09:48:55.665425135 +0000 UTC m=+0.130510424 container init a0aead0ce8ef02d34312703150ffd887e8bfad61b258390be794d026d52eb9dd (image=quay.io/ceph/ceph:v19, name=affectionate_sanderson, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  1 04:48:55 np0005540825 podman[85250]: 2025-12-01 09:48:55.672649781 +0000 UTC m=+0.137735030 container start a0aead0ce8ef02d34312703150ffd887e8bfad61b258390be794d026d52eb9dd (image=quay.io/ceph/ceph:v19, name=affectionate_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:48:55 np0005540825 podman[85250]: 2025-12-01 09:48:55.676528896 +0000 UTC m=+0.141614245 container attach a0aead0ce8ef02d34312703150ffd887e8bfad61b258390be794d026d52eb9dd (image=quay.io/ceph/ceph:v19, name=affectionate_sanderson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:48:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  1 04:48:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3233395636' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:48:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec  1 04:48:56 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3233395636' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:48:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3233395636' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:48:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Dec  1 04:48:56 np0005540825 affectionate_sanderson[85266]: pool 'vms' created
Dec  1 04:48:56 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Dec  1 04:48:56 np0005540825 systemd[1]: libpod-a0aead0ce8ef02d34312703150ffd887e8bfad61b258390be794d026d52eb9dd.scope: Deactivated successfully.
Dec  1 04:48:56 np0005540825 podman[85250]: 2025-12-01 09:48:56.403222001 +0000 UTC m=+0.868307250 container died a0aead0ce8ef02d34312703150ffd887e8bfad61b258390be794d026d52eb9dd (image=quay.io/ceph/ceph:v19, name=affectionate_sanderson, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:48:56 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:48:56 np0005540825 systemd[1]: var-lib-containers-storage-overlay-92cf9f5469cf06779543ab966044f0ff6a21cda38499a932b2688e1d37d8fbfc-merged.mount: Deactivated successfully.
Dec  1 04:48:56 np0005540825 podman[85250]: 2025-12-01 09:48:56.485469957 +0000 UTC m=+0.950555206 container remove a0aead0ce8ef02d34312703150ffd887e8bfad61b258390be794d026d52eb9dd (image=quay.io/ceph/ceph:v19, name=affectionate_sanderson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 04:48:56 np0005540825 systemd[1]: libpod-conmon-a0aead0ce8ef02d34312703150ffd887e8bfad61b258390be794d026d52eb9dd.scope: Deactivated successfully.
Dec  1 04:48:56 np0005540825 python3[85331]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:48:56 np0005540825 podman[85332]: 2025-12-01 09:48:56.909486977 +0000 UTC m=+0.066420769 container create e8348dbc1a2833f21e985a6e51fedb11d199f1d611466bc12c8a218ef406ed4b (image=quay.io/ceph/ceph:v19, name=kind_tesla, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:56 np0005540825 systemd[1]: Started libpod-conmon-e8348dbc1a2833f21e985a6e51fedb11d199f1d611466bc12c8a218ef406ed4b.scope.
Dec  1 04:48:56 np0005540825 podman[85332]: 2025-12-01 09:48:56.865901347 +0000 UTC m=+0.022835159 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:48:56 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab23a002d2221df787f48e9abdc2ee3b3eafe70b25ffdbc6e9f6acfbe714f28b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab23a002d2221df787f48e9abdc2ee3b3eafe70b25ffdbc6e9f6acfbe714f28b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:57 np0005540825 podman[85332]: 2025-12-01 09:48:57.014773308 +0000 UTC m=+0.171707120 container init e8348dbc1a2833f21e985a6e51fedb11d199f1d611466bc12c8a218ef406ed4b (image=quay.io/ceph/ceph:v19, name=kind_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec  1 04:48:57 np0005540825 podman[85332]: 2025-12-01 09:48:57.023942916 +0000 UTC m=+0.180876708 container start e8348dbc1a2833f21e985a6e51fedb11d199f1d611466bc12c8a218ef406ed4b (image=quay.io/ceph/ceph:v19, name=kind_tesla, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:48:57 np0005540825 podman[85332]: 2025-12-01 09:48:57.027461671 +0000 UTC m=+0.184395493 container attach e8348dbc1a2833f21e985a6e51fedb11d199f1d611466bc12c8a218ef406ed4b (image=quay.io/ceph/ceph:v19, name=kind_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  1 04:48:57 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3233395636' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:48:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec  1 04:48:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  1 04:48:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3325360571' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:48:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Dec  1 04:48:57 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Dec  1 04:48:57 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:48:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v61: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:48:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:48:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:48:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec  1 04:48:58 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3325360571' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:48:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3325360571' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:48:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Dec  1 04:48:58 np0005540825 kind_tesla[85348]: pool 'volumes' created
Dec  1 04:48:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Dec  1 04:48:58 np0005540825 systemd[1]: libpod-e8348dbc1a2833f21e985a6e51fedb11d199f1d611466bc12c8a218ef406ed4b.scope: Deactivated successfully.
Dec  1 04:48:58 np0005540825 podman[85375]: 2025-12-01 09:48:58.575207924 +0000 UTC m=+0.035181534 container died e8348dbc1a2833f21e985a6e51fedb11d199f1d611466bc12c8a218ef406ed4b (image=quay.io/ceph/ceph:v19, name=kind_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 04:48:58 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ab23a002d2221df787f48e9abdc2ee3b3eafe70b25ffdbc6e9f6acfbe714f28b-merged.mount: Deactivated successfully.
Dec  1 04:48:58 np0005540825 podman[85375]: 2025-12-01 09:48:58.679932739 +0000 UTC m=+0.139906379 container remove e8348dbc1a2833f21e985a6e51fedb11d199f1d611466bc12c8a218ef406ed4b (image=quay.io/ceph/ceph:v19, name=kind_tesla, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:48:58 np0005540825 systemd[1]: libpod-conmon-e8348dbc1a2833f21e985a6e51fedb11d199f1d611466bc12c8a218ef406ed4b.scope: Deactivated successfully.
Dec  1 04:48:59 np0005540825 python3[85415]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:48:59 np0005540825 podman[85416]: 2025-12-01 09:48:59.13059067 +0000 UTC m=+0.050792106 container create 7124e8b1b1112d0f3d7273af4f4f4050edaa0341bcf95ad2ce406f08ea1d8435 (image=quay.io/ceph/ceph:v19, name=elated_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:48:59 np0005540825 systemd[1]: Started libpod-conmon-7124e8b1b1112d0f3d7273af4f4f4050edaa0341bcf95ad2ce406f08ea1d8435.scope.
Dec  1 04:48:59 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:48:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc51229242469076027ebf9cc585680667a302b0f7be2095c7949b8b36eaad9a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc51229242469076027ebf9cc585680667a302b0f7be2095c7949b8b36eaad9a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:48:59 np0005540825 podman[85416]: 2025-12-01 09:48:59.110861816 +0000 UTC m=+0.031063352 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:48:59 np0005540825 podman[85416]: 2025-12-01 09:48:59.208904321 +0000 UTC m=+0.129105767 container init 7124e8b1b1112d0f3d7273af4f4f4050edaa0341bcf95ad2ce406f08ea1d8435 (image=quay.io/ceph/ceph:v19, name=elated_allen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:48:59 np0005540825 podman[85416]: 2025-12-01 09:48:59.215325224 +0000 UTC m=+0.135526680 container start 7124e8b1b1112d0f3d7273af4f4f4050edaa0341bcf95ad2ce406f08ea1d8435 (image=quay.io/ceph/ceph:v19, name=elated_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:48:59 np0005540825 podman[85416]: 2025-12-01 09:48:59.219026355 +0000 UTC m=+0.139227871 container attach 7124e8b1b1112d0f3d7273af4f4f4050edaa0341bcf95ad2ce406f08ea1d8435 (image=quay.io/ceph/ceph:v19, name=elated_allen, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 04:48:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec  1 04:48:59 np0005540825 ceph-mon[74416]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:48:59 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3325360571' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:48:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Dec  1 04:48:59 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Dec  1 04:48:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  1 04:48:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1854191706' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:49:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v64: 3 pgs: 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec  1 04:49:00 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1854191706' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:49:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1854191706' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:49:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Dec  1 04:49:00 np0005540825 elated_allen[85432]: pool 'backups' created
Dec  1 04:49:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Dec  1 04:49:00 np0005540825 systemd[1]: libpod-7124e8b1b1112d0f3d7273af4f4f4050edaa0341bcf95ad2ce406f08ea1d8435.scope: Deactivated successfully.
Dec  1 04:49:00 np0005540825 podman[85416]: 2025-12-01 09:49:00.667328885 +0000 UTC m=+1.587530311 container died 7124e8b1b1112d0f3d7273af4f4f4050edaa0341bcf95ad2ce406f08ea1d8435 (image=quay.io/ceph/ceph:v19, name=elated_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 04:49:00 np0005540825 systemd[1]: var-lib-containers-storage-overlay-dc51229242469076027ebf9cc585680667a302b0f7be2095c7949b8b36eaad9a-merged.mount: Deactivated successfully.
Dec  1 04:49:00 np0005540825 podman[85416]: 2025-12-01 09:49:00.710414341 +0000 UTC m=+1.630615827 container remove 7124e8b1b1112d0f3d7273af4f4f4050edaa0341bcf95ad2ce406f08ea1d8435 (image=quay.io/ceph/ceph:v19, name=elated_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 04:49:00 np0005540825 systemd[1]: libpod-conmon-7124e8b1b1112d0f3d7273af4f4f4050edaa0341bcf95ad2ce406f08ea1d8435.scope: Deactivated successfully.
Dec  1 04:49:01 np0005540825 python3[85498]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:01 np0005540825 podman[85499]: 2025-12-01 09:49:01.088368624 +0000 UTC m=+0.047832516 container create 85b1155b489aa556b7c6816d9945904eb318665ec1b660b7c0b54df8e6603b23 (image=quay.io/ceph/ceph:v19, name=condescending_babbage, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 04:49:01 np0005540825 systemd[1]: Started libpod-conmon-85b1155b489aa556b7c6816d9945904eb318665ec1b660b7c0b54df8e6603b23.scope.
Dec  1 04:49:01 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655a2fd7a6d58eb4c5520cd3658ae8f4f5160a297915cff1aa1099cc8b02716d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655a2fd7a6d58eb4c5520cd3658ae8f4f5160a297915cff1aa1099cc8b02716d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:01 np0005540825 podman[85499]: 2025-12-01 09:49:01.07086671 +0000 UTC m=+0.030330622 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:01 np0005540825 podman[85499]: 2025-12-01 09:49:01.181697081 +0000 UTC m=+0.141161003 container init 85b1155b489aa556b7c6816d9945904eb318665ec1b660b7c0b54df8e6603b23 (image=quay.io/ceph/ceph:v19, name=condescending_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:01 np0005540825 podman[85499]: 2025-12-01 09:49:01.193978483 +0000 UTC m=+0.153442375 container start 85b1155b489aa556b7c6816d9945904eb318665ec1b660b7c0b54df8e6603b23 (image=quay.io/ceph/ceph:v19, name=condescending_babbage, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 04:49:01 np0005540825 podman[85499]: 2025-12-01 09:49:01.198065804 +0000 UTC m=+0.157529706 container attach 85b1155b489aa556b7c6816d9945904eb318665ec1b660b7c0b54df8e6603b23 (image=quay.io/ceph/ceph:v19, name=condescending_babbage, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 04:49:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  1 04:49:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3616089824' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:49:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec  1 04:49:01 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1854191706' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:49:01 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3616089824' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:49:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3616089824' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:49:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Dec  1 04:49:01 np0005540825 condescending_babbage[85514]: pool 'images' created
Dec  1 04:49:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Dec  1 04:49:01 np0005540825 systemd[1]: libpod-85b1155b489aa556b7c6816d9945904eb318665ec1b660b7c0b54df8e6603b23.scope: Deactivated successfully.
Dec  1 04:49:01 np0005540825 podman[85499]: 2025-12-01 09:49:01.908247841 +0000 UTC m=+0.867711753 container died 85b1155b489aa556b7c6816d9945904eb318665ec1b660b7c0b54df8e6603b23 (image=quay.io/ceph/ceph:v19, name=condescending_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 04:49:01 np0005540825 systemd[1]: var-lib-containers-storage-overlay-655a2fd7a6d58eb4c5520cd3658ae8f4f5160a297915cff1aa1099cc8b02716d-merged.mount: Deactivated successfully.
Dec  1 04:49:01 np0005540825 podman[85499]: 2025-12-01 09:49:01.978085282 +0000 UTC m=+0.937549174 container remove 85b1155b489aa556b7c6816d9945904eb318665ec1b660b7c0b54df8e6603b23 (image=quay.io/ceph/ceph:v19, name=condescending_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 04:49:01 np0005540825 systemd[1]: libpod-conmon-85b1155b489aa556b7c6816d9945904eb318665ec1b660b7c0b54df8e6603b23.scope: Deactivated successfully.
Dec  1 04:49:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v67: 5 pgs: 2 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:02 np0005540825 python3[85577]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:02 np0005540825 podman[85578]: 2025-12-01 09:49:02.421664841 +0000 UTC m=+0.069377999 container create 16b4cd80e2a1338f28ed44f90d834113aa185bb94a1b8ee699e16909d05e0209 (image=quay.io/ceph/ceph:v19, name=competent_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:02 np0005540825 systemd[1]: Started libpod-conmon-16b4cd80e2a1338f28ed44f90d834113aa185bb94a1b8ee699e16909d05e0209.scope.
Dec  1 04:49:02 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61fb93d397557c2257659fe6c2c71ab4857ed1c09af610535a7edaca3859000c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61fb93d397557c2257659fe6c2c71ab4857ed1c09af610535a7edaca3859000c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:02 np0005540825 podman[85578]: 2025-12-01 09:49:02.393563291 +0000 UTC m=+0.041276529 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:02 np0005540825 podman[85578]: 2025-12-01 09:49:02.501541414 +0000 UTC m=+0.149254612 container init 16b4cd80e2a1338f28ed44f90d834113aa185bb94a1b8ee699e16909d05e0209 (image=quay.io/ceph/ceph:v19, name=competent_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 04:49:02 np0005540825 podman[85578]: 2025-12-01 09:49:02.507549997 +0000 UTC m=+0.155263185 container start 16b4cd80e2a1338f28ed44f90d834113aa185bb94a1b8ee699e16909d05e0209 (image=quay.io/ceph/ceph:v19, name=competent_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  1 04:49:02 np0005540825 podman[85578]: 2025-12-01 09:49:02.51690852 +0000 UTC m=+0.164621768 container attach 16b4cd80e2a1338f28ed44f90d834113aa185bb94a1b8ee699e16909d05e0209 (image=quay.io/ceph/ceph:v19, name=competent_ritchie, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  1 04:49:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec  1 04:49:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Dec  1 04:49:02 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Dec  1 04:49:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  1 04:49:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1229914286' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:49:02 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3616089824' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:49:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:49:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec  1 04:49:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1229914286' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:49:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Dec  1 04:49:03 np0005540825 competent_ritchie[85593]: pool 'cephfs.cephfs.meta' created
Dec  1 04:49:03 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Dec  1 04:49:03 np0005540825 systemd[1]: libpod-16b4cd80e2a1338f28ed44f90d834113aa185bb94a1b8ee699e16909d05e0209.scope: Deactivated successfully.
Dec  1 04:49:03 np0005540825 podman[85578]: 2025-12-01 09:49:03.939200457 +0000 UTC m=+1.586913615 container died 16b4cd80e2a1338f28ed44f90d834113aa185bb94a1b8ee699e16909d05e0209 (image=quay.io/ceph/ceph:v19, name=competent_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 04:49:03 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1229914286' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:49:03 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1229914286' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:49:03 np0005540825 systemd[1]: var-lib-containers-storage-overlay-61fb93d397557c2257659fe6c2c71ab4857ed1c09af610535a7edaca3859000c-merged.mount: Deactivated successfully.
Dec  1 04:49:04 np0005540825 podman[85578]: 2025-12-01 09:49:03.999720376 +0000 UTC m=+1.647433534 container remove 16b4cd80e2a1338f28ed44f90d834113aa185bb94a1b8ee699e16909d05e0209 (image=quay.io/ceph/ceph:v19, name=competent_ritchie, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  1 04:49:04 np0005540825 systemd[1]: libpod-conmon-16b4cd80e2a1338f28ed44f90d834113aa185bb94a1b8ee699e16909d05e0209.scope: Deactivated successfully.
Dec  1 04:49:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v70: 6 pgs: 3 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:04 np0005540825 python3[85657]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:04 np0005540825 podman[85658]: 2025-12-01 09:49:04.360090671 +0000 UTC m=+0.050061026 container create 61d84f0f08f7e31e531f2c9b3d5cc33c5ef74031b17c6467f1ebcbae99b2e6f8 (image=quay.io/ceph/ceph:v19, name=friendly_leavitt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  1 04:49:04 np0005540825 systemd[1]: Started libpod-conmon-61d84f0f08f7e31e531f2c9b3d5cc33c5ef74031b17c6467f1ebcbae99b2e6f8.scope.
Dec  1 04:49:04 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86a195faf919de372c20dc7810bcf7e1846cdcdbf0318ceb3ba4760414ca911/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86a195faf919de372c20dc7810bcf7e1846cdcdbf0318ceb3ba4760414ca911/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:04 np0005540825 podman[85658]: 2025-12-01 09:49:04.336354309 +0000 UTC m=+0.026324704 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:04 np0005540825 podman[85658]: 2025-12-01 09:49:04.445269237 +0000 UTC m=+0.135239582 container init 61d84f0f08f7e31e531f2c9b3d5cc33c5ef74031b17c6467f1ebcbae99b2e6f8 (image=quay.io/ceph/ceph:v19, name=friendly_leavitt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  1 04:49:04 np0005540825 podman[85658]: 2025-12-01 09:49:04.451706132 +0000 UTC m=+0.141676477 container start 61d84f0f08f7e31e531f2c9b3d5cc33c5ef74031b17c6467f1ebcbae99b2e6f8 (image=quay.io/ceph/ceph:v19, name=friendly_leavitt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:04 np0005540825 podman[85658]: 2025-12-01 09:49:04.456011328 +0000 UTC m=+0.145981673 container attach 61d84f0f08f7e31e531f2c9b3d5cc33c5ef74031b17c6467f1ebcbae99b2e6f8 (image=quay.io/ceph/ceph:v19, name=friendly_leavitt, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 04:49:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  1 04:49:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2077112150' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:49:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec  1 04:49:05 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:49:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2077112150' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:49:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Dec  1 04:49:05 np0005540825 friendly_leavitt[85673]: pool 'cephfs.cephfs.data' created
Dec  1 04:49:05 np0005540825 systemd[1]: libpod-61d84f0f08f7e31e531f2c9b3d5cc33c5ef74031b17c6467f1ebcbae99b2e6f8.scope: Deactivated successfully.
Dec  1 04:49:05 np0005540825 podman[85658]: 2025-12-01 09:49:05.234719091 +0000 UTC m=+0.924689476 container died 61d84f0f08f7e31e531f2c9b3d5cc33c5ef74031b17c6467f1ebcbae99b2e6f8 (image=quay.io/ceph/ceph:v19, name=friendly_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  1 04:49:05 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Dec  1 04:49:05 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:05 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/2077112150' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  1 04:49:05 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e86a195faf919de372c20dc7810bcf7e1846cdcdbf0318ceb3ba4760414ca911-merged.mount: Deactivated successfully.
Dec  1 04:49:05 np0005540825 podman[85658]: 2025-12-01 09:49:05.581972732 +0000 UTC m=+1.271943117 container remove 61d84f0f08f7e31e531f2c9b3d5cc33c5ef74031b17c6467f1ebcbae99b2e6f8 (image=quay.io/ceph/ceph:v19, name=friendly_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  1 04:49:05 np0005540825 systemd[1]: libpod-conmon-61d84f0f08f7e31e531f2c9b3d5cc33c5ef74031b17c6467f1ebcbae99b2e6f8.scope: Deactivated successfully.
Dec  1 04:49:05 np0005540825 python3[85737]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:06 np0005540825 podman[85738]: 2025-12-01 09:49:06.015867049 +0000 UTC m=+0.030279100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec  1 04:49:06 np0005540825 podman[85738]: 2025-12-01 09:49:06.530097022 +0000 UTC m=+0.544509103 container create ba8235c3e59e79b95452a2c33f32c7d3ebd991a80e7c9fb1114c66777ee9855f (image=quay.io/ceph/ceph:v19, name=jolly_haibt, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 04:49:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Dec  1 04:49:06 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Dec  1 04:49:06 np0005540825 ceph-mon[74416]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:49:06 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/2077112150' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  1 04:49:06 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
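
The new data pool starts life as a single placement group (7.0): osd.1 reports the PG transitioning to Primary in epoch 22, then AllReplicasActivated in epoch 23, while the mgr pgmap summaries track it from unknown through creating+peering to active+clean. The same transition can be followed on a live cluster with read-only commands like (a sketch; pool name as in this log):

    ceph pg ls-by-pool cephfs.cephfs.data    # per-PG states for the new pool
    ceph -s                                  # cluster-wide PG summary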
Dec  1 04:49:06 np0005540825 systemd[1]: Started libpod-conmon-ba8235c3e59e79b95452a2c33f32c7d3ebd991a80e7c9fb1114c66777ee9855f.scope.
Dec  1 04:49:06 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc032b702767b3c57f2e2424cb71cb611c554e201637a86df71dfaf4ec22537/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc032b702767b3c57f2e2424cb71cb611c554e201637a86df71dfaf4ec22537/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:06 np0005540825 podman[85738]: 2025-12-01 09:49:06.616279345 +0000 UTC m=+0.630691386 container init ba8235c3e59e79b95452a2c33f32c7d3ebd991a80e7c9fb1114c66777ee9855f (image=quay.io/ceph/ceph:v19, name=jolly_haibt, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:06 np0005540825 podman[85738]: 2025-12-01 09:49:06.62164208 +0000 UTC m=+0.636054121 container start ba8235c3e59e79b95452a2c33f32c7d3ebd991a80e7c9fb1114c66777ee9855f (image=quay.io/ceph/ceph:v19, name=jolly_haibt, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 04:49:06 np0005540825 podman[85738]: 2025-12-01 09:49:06.625170056 +0000 UTC m=+0.639582097 container attach ba8235c3e59e79b95452a2c33f32c7d3ebd991a80e7c9fb1114c66777ee9855f (image=quay.io/ceph/ceph:v19, name=jolly_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  1 04:49:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec  1 04:49:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1501489114' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  1 04:49:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec  1 04:49:07 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1501489114' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  1 04:49:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1501489114' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  1 04:49:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Dec  1 04:49:07 np0005540825 jolly_haibt[85754]: enabled application 'rbd' on pool 'vms'
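
Each 'osd pool application enable' clears one pool from the POOL_APP_NOT_ENABLED health warning; the WRN count in the cluster log drops from 5 to 4 once these tags start landing. To list the offending pools and tag them by hand, something like (a sketch):

    ceph health detail                           # names the pools behind POOL_APP_NOT_ENABLED
    ceph osd pool application enable <pool> rbd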
Dec  1 04:49:07 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Dec  1 04:49:07 np0005540825 systemd[1]: libpod-ba8235c3e59e79b95452a2c33f32c7d3ebd991a80e7c9fb1114c66777ee9855f.scope: Deactivated successfully.
Dec  1 04:49:07 np0005540825 podman[85738]: 2025-12-01 09:49:07.910026562 +0000 UTC m=+1.924438653 container died ba8235c3e59e79b95452a2c33f32c7d3ebd991a80e7c9fb1114c66777ee9855f (image=quay.io/ceph/ceph:v19, name=jolly_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 04:49:07 np0005540825 systemd[1]: var-lib-containers-storage-overlay-dbc032b702767b3c57f2e2424cb71cb611c554e201637a86df71dfaf4ec22537-merged.mount: Deactivated successfully.
Dec  1 04:49:07 np0005540825 podman[85738]: 2025-12-01 09:49:07.96278787 +0000 UTC m=+1.977199921 container remove ba8235c3e59e79b95452a2c33f32c7d3ebd991a80e7c9fb1114c66777ee9855f (image=quay.io/ceph/ceph:v19, name=jolly_haibt, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:07 np0005540825 systemd[1]: libpod-conmon-ba8235c3e59e79b95452a2c33f32c7d3ebd991a80e7c9fb1114c66777ee9855f.scope: Deactivated successfully.
Dec  1 04:49:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:49:08 np0005540825 python3[85816]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:08 np0005540825 podman[85817]: 2025-12-01 09:49:08.526016158 +0000 UTC m=+0.072469043 container create 1a51767ed0c5d1e7aec794f72860f455d6d58e9b52ae4b39e79fb06d1b708d44 (image=quay.io/ceph/ceph:v19, name=loving_sammet, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:08 np0005540825 podman[85817]: 2025-12-01 09:49:08.478141842 +0000 UTC m=+0.024594797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:08 np0005540825 systemd[1]: Started libpod-conmon-1a51767ed0c5d1e7aec794f72860f455d6d58e9b52ae4b39e79fb06d1b708d44.scope.
Dec  1 04:49:08 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e778eabb4314e4393bc8687db762d47b6c0acad4db9965215321c2c1c5f91c8c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e778eabb4314e4393bc8687db762d47b6c0acad4db9965215321c2c1c5f91c8c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:08 np0005540825 podman[85817]: 2025-12-01 09:49:08.670909241 +0000 UTC m=+0.217362176 container init 1a51767ed0c5d1e7aec794f72860f455d6d58e9b52ae4b39e79fb06d1b708d44 (image=quay.io/ceph/ceph:v19, name=loving_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:49:08 np0005540825 podman[85817]: 2025-12-01 09:49:08.676509063 +0000 UTC m=+0.222961928 container start 1a51767ed0c5d1e7aec794f72860f455d6d58e9b52ae4b39e79fb06d1b708d44 (image=quay.io/ceph/ceph:v19, name=loving_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:08 np0005540825 podman[85817]: 2025-12-01 09:49:08.680128321 +0000 UTC m=+0.226581266 container attach 1a51767ed0c5d1e7aec794f72860f455d6d58e9b52ae4b39e79fb06d1b708d44 (image=quay.io/ceph/ceph:v19, name=loving_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 04:49:09 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1501489114' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  1 04:49:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec  1 04:49:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4059149654' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  1 04:49:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec  1 04:49:10 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4059149654' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  1 04:49:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4059149654' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  1 04:49:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Dec  1 04:49:10 np0005540825 loving_sammet[85832]: enabled application 'rbd' on pool 'volumes'
Dec  1 04:49:10 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Dec  1 04:49:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:10 np0005540825 systemd[1]: libpod-1a51767ed0c5d1e7aec794f72860f455d6d58e9b52ae4b39e79fb06d1b708d44.scope: Deactivated successfully.
Dec  1 04:49:10 np0005540825 podman[85817]: 2025-12-01 09:49:10.052815975 +0000 UTC m=+1.599268870 container died 1a51767ed0c5d1e7aec794f72860f455d6d58e9b52ae4b39e79fb06d1b708d44 (image=quay.io/ceph/ceph:v19, name=loving_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 04:49:10 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e778eabb4314e4393bc8687db762d47b6c0acad4db9965215321c2c1c5f91c8c-merged.mount: Deactivated successfully.
Dec  1 04:49:10 np0005540825 podman[85817]: 2025-12-01 09:49:10.098977865 +0000 UTC m=+1.645430720 container remove 1a51767ed0c5d1e7aec794f72860f455d6d58e9b52ae4b39e79fb06d1b708d44 (image=quay.io/ceph/ceph:v19, name=loving_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:10 np0005540825 systemd[1]: libpod-conmon-1a51767ed0c5d1e7aec794f72860f455d6d58e9b52ae4b39e79fb06d1b708d44.scope: Deactivated successfully.
Dec  1 04:49:10 np0005540825 python3[85894]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:10 np0005540825 podman[85895]: 2025-12-01 09:49:10.536392307 +0000 UTC m=+0.054859096 container create 092f39eea2fa3867ec2c0315b753acd06d5049b091b54d47f4e68b95eb3b8457 (image=quay.io/ceph/ceph:v19, name=distracted_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:10 np0005540825 systemd[1]: Started libpod-conmon-092f39eea2fa3867ec2c0315b753acd06d5049b091b54d47f4e68b95eb3b8457.scope.
Dec  1 04:49:10 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:10 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb8947eee250dbe9f60fb951fbd348eb58c9ec4bde4bf9d86314d783604ef4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:10 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb8947eee250dbe9f60fb951fbd348eb58c9ec4bde4bf9d86314d783604ef4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:10 np0005540825 podman[85895]: 2025-12-01 09:49:10.607357779 +0000 UTC m=+0.125824588 container init 092f39eea2fa3867ec2c0315b753acd06d5049b091b54d47f4e68b95eb3b8457 (image=quay.io/ceph/ceph:v19, name=distracted_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:10 np0005540825 podman[85895]: 2025-12-01 09:49:10.51803875 +0000 UTC m=+0.036505579 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:10 np0005540825 podman[85895]: 2025-12-01 09:49:10.614460051 +0000 UTC m=+0.132926840 container start 092f39eea2fa3867ec2c0315b753acd06d5049b091b54d47f4e68b95eb3b8457 (image=quay.io/ceph/ceph:v19, name=distracted_bassi, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 04:49:10 np0005540825 podman[85895]: 2025-12-01 09:49:10.618032438 +0000 UTC m=+0.136499317 container attach 092f39eea2fa3867ec2c0315b753acd06d5049b091b54d47f4e68b95eb3b8457 (image=quay.io/ceph/ceph:v19, name=distracted_bassi, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec  1 04:49:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1668628802' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  1 04:49:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec  1 04:49:11 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:49:11 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4059149654' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  1 04:49:11 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1668628802' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  1 04:49:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1668628802' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  1 04:49:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Dec  1 04:49:11 np0005540825 distracted_bassi[85910]: enabled application 'rbd' on pool 'backups'
Dec  1 04:49:11 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Dec  1 04:49:11 np0005540825 systemd[1]: libpod-092f39eea2fa3867ec2c0315b753acd06d5049b091b54d47f4e68b95eb3b8457.scope: Deactivated successfully.
Dec  1 04:49:11 np0005540825 conmon[85910]: conmon 092f39eea2fa3867ec2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-092f39eea2fa3867ec2c0315b753acd06d5049b091b54d47f4e68b95eb3b8457.scope/container/memory.events
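
The conmon warning about the missing memory.events file is a common race with short-lived '--rm' containers: the cgroup is torn down as the container exits, before conmon reads its memory events. The container (distracted_bassi) had already printed its result, so the warning is typically harmless.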
Dec  1 04:49:11 np0005540825 podman[85935]: 2025-12-01 09:49:11.144758318 +0000 UTC m=+0.063278714 container died 092f39eea2fa3867ec2c0315b753acd06d5049b091b54d47f4e68b95eb3b8457 (image=quay.io/ceph/ceph:v19, name=distracted_bassi, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:49:11 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7dcb8947eee250dbe9f60fb951fbd348eb58c9ec4bde4bf9d86314d783604ef4-merged.mount: Deactivated successfully.
Dec  1 04:49:11 np0005540825 podman[85935]: 2025-12-01 09:49:11.192118571 +0000 UTC m=+0.110638897 container remove 092f39eea2fa3867ec2c0315b753acd06d5049b091b54d47f4e68b95eb3b8457 (image=quay.io/ceph/ceph:v19, name=distracted_bassi, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 04:49:11 np0005540825 systemd[1]: libpod-conmon-092f39eea2fa3867ec2c0315b753acd06d5049b091b54d47f4e68b95eb3b8457.scope: Deactivated successfully.
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:49:11
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images']
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
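
Woken by the new osdmaps, the balancer runs in upmap mode with a 5% max-misplaced budget, evaluates all seven pools, and prepares 0 of at most 10 upmap changes: with two OSDs and essentially empty pools there is nothing to move. Its state can be checked with (a sketch):

    ceph balancer status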
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
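
The pg_autoscaler pass logs its sizing inputs per pool: fraction of root capacity used, bias, the raw PG target, and the result quantized to a power of two. The near-empty pools all quantize up from a raw target of 0.0 to 32 PGs (current 1), while .mgr stays at 1; the 'osd pool set ... pg_num 32' commands that follow are the autoscaler acting on those targets. The same figures are available as a table via (a sketch):

    ceph osd pool autoscale-status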
Dec  1 04:49:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:49:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:49:11 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:49:11 np0005540825 python3[85975]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:11 np0005540825 podman[85976]: 2025-12-01 09:49:11.646202184 +0000 UTC m=+0.072368481 container create 5da3563d8f00d957c5996a656d621a784b128e4a708dfe260d72eae293fd29cb (image=quay.io/ceph/ceph:v19, name=mystifying_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 04:49:11 np0005540825 systemd[1]: Started libpod-conmon-5da3563d8f00d957c5996a656d621a784b128e4a708dfe260d72eae293fd29cb.scope.
Dec  1 04:49:11 np0005540825 podman[85976]: 2025-12-01 09:49:11.616411537 +0000 UTC m=+0.042577874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:11 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:11 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/814e57fb5d5b9d0864bf5c11873c58e1ec3b3c0c4aa196fa0f4c8f108415ab6f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:11 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/814e57fb5d5b9d0864bf5c11873c58e1ec3b3c0c4aa196fa0f4c8f108415ab6f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:11 np0005540825 podman[85976]: 2025-12-01 09:49:11.753618982 +0000 UTC m=+0.179785269 container init 5da3563d8f00d957c5996a656d621a784b128e4a708dfe260d72eae293fd29cb (image=quay.io/ceph/ceph:v19, name=mystifying_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  1 04:49:11 np0005540825 podman[85976]: 2025-12-01 09:49:11.772725669 +0000 UTC m=+0.198891936 container start 5da3563d8f00d957c5996a656d621a784b128e4a708dfe260d72eae293fd29cb (image=quay.io/ceph/ceph:v19, name=mystifying_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 04:49:11 np0005540825 podman[85976]: 2025-12-01 09:49:11.776422669 +0000 UTC m=+0.202588976 container attach 5da3563d8f00d957c5996a656d621a784b128e4a708dfe260d72eae293fd29cb (image=quay.io/ceph/ceph:v19, name=mystifying_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1668628802' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Dec  1 04:49:12 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 4a4810ed-cfee-4af3-91dc-713518568bec (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
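
The PG count is raised in two steps, both visible here: 'pg_num' records the pool's new target, and the mgr then walks the internal 'pg_num_actual' toward it so the PG splits are applied incrementally rather than all at once. The current value can be read back with (a sketch):

    ceph osd pool get vms pg_num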
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3828223939' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:49:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:49:12 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  1 04:49:12 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
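
Interleaved with the pool work, the cephadm serve loop refreshes host compute-2: it stores the scanned host and device facts under mgr/cephadm config-keys, asks the mon for a minimal client config and the admin key ('config generate-minimal-conf', 'auth get client.admin'), and pushes the resulting files to the host (ceph.conf here, the admin keyring at 04:49:13). The same minimal config can be produced by hand (a sketch):

    ceph config generate-minimal-conf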
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3828223939' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Dec  1 04:49:13 np0005540825 mystifying_perlman[85991]: enabled application 'rbd' on pool 'images'
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3828223939' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Dec  1 04:49:13 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 4938d042-84fa-4986-9102-91667e4b0b14 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:13 np0005540825 systemd[1]: libpod-5da3563d8f00d957c5996a656d621a784b128e4a708dfe260d72eae293fd29cb.scope: Deactivated successfully.
Dec  1 04:49:13 np0005540825 podman[85976]: 2025-12-01 09:49:13.079624012 +0000 UTC m=+1.505790309 container died 5da3563d8f00d957c5996a656d621a784b128e4a708dfe260d72eae293fd29cb (image=quay.io/ceph/ceph:v19, name=mystifying_perlman, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 04:49:13 np0005540825 systemd[1]: var-lib-containers-storage-overlay-814e57fb5d5b9d0864bf5c11873c58e1ec3b3c0c4aa196fa0f4c8f108415ab6f-merged.mount: Deactivated successfully.
Dec  1 04:49:13 np0005540825 podman[85976]: 2025-12-01 09:49:13.121449665 +0000 UTC m=+1.547615922 container remove 5da3563d8f00d957c5996a656d621a784b128e4a708dfe260d72eae293fd29cb (image=quay.io/ceph/ceph:v19, name=mystifying_perlman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:49:13 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:49:13 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:49:13 np0005540825 systemd[1]: libpod-conmon-5da3563d8f00d957c5996a656d621a784b128e4a708dfe260d72eae293fd29cb.scope: Deactivated successfully.
Dec  1 04:49:13 np0005540825 python3[86053]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
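Note: the Ansible task above shells out to a throwaway ceph container rather than a host-installed CLI. Reflowed for readability, the invoked command is exactly:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool application enable cephfs.cephfs.meta cephfs

On a host with the ceph CLI installed this reduces to `ceph osd pool application enable cephfs.cephfs.meta cephfs`; the container create/start/attach/died/remove lines that follow trace this one-shot container's lifecycle.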
Dec  1 04:49:13 np0005540825 podman[86054]: 2025-12-01 09:49:13.524063685 +0000 UTC m=+0.050950940 container create 45265630d5e176bac9b498f81762725fc7bddb7b75a4639806073d89aeb5ec47 (image=quay.io/ceph/ceph:v19, name=vigilant_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec  1 04:49:13 np0005540825 systemd[1]: Started libpod-conmon-45265630d5e176bac9b498f81762725fc7bddb7b75a4639806073d89aeb5ec47.scope.
Dec  1 04:49:13 np0005540825 podman[86054]: 2025-12-01 09:49:13.502544083 +0000 UTC m=+0.029431378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:13 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0633df42a44db2ce972304d73941633cdd321efbb5a3829e0da36c58a5954b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0633df42a44db2ce972304d73941633cdd321efbb5a3829e0da36c58a5954b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:13 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:49:13 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:49:14 np0005540825 podman[86054]: 2025-12-01 09:49:14.03838239 +0000 UTC m=+0.565269735 container init 45265630d5e176bac9b498f81762725fc7bddb7b75a4639806073d89aeb5ec47 (image=quay.io/ceph/ceph:v19, name=vigilant_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 04:49:14 np0005540825 podman[86054]: 2025-12-01 09:49:14.043507109 +0000 UTC m=+0.570394394 container start 45265630d5e176bac9b498f81762725fc7bddb7b75a4639806073d89aeb5ec47 (image=quay.io/ceph/ceph:v19, name=vigilant_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 04:49:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v82: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec  1 04:49:14 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:49:14 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:49:14 np0005540825 podman[86054]: 2025-12-01 09:49:14.529892687 +0000 UTC m=+1.056779992 container attach 45265630d5e176bac9b498f81762725fc7bddb7b75a4639806073d89aeb5ec47 (image=quay.io/ceph/ceph:v19, name=vigilant_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
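Note: each pool resize shows up twice in the audit trail: `pg_num` records the operator-visible target, after which the mgr advances the actual split via the internal `pg_num_actual` setter (this staged-split behavior is my reading of the mgr's gradual pg_num application, not something the log states). To watch a split converge, assuming CLI access:

    ceph osd pool get volumes pg_num    # the target PG count
    ceph osd pool get volumes pgp_num   # the placement count that follows the split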
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 28 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=28 pruub=14.937423706s) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active pruub 61.910240173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 28 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=28 pruub=14.937423706s) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown pruub 61.910240173s@ mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Dec  1 04:49:14 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev d54568f7-91c4-4d2e-8aa2-49d9deba552e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: Updating compute-2:/etc/ceph/ceph.conf
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3828223939' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.1d( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.1c( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.8( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.2( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.b( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.f( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.12( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.11( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.14( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.16( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.18( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.1a( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
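Note: the burst of "transitioning to Primary" lines above is osd.1 instantiating the PGs created by one pool's split from 1 to 32: pg 2.0 predates the split (ec=14/14), while 2.1 through 2.1f were created at epoch 28 (ec=28/14). To map pool id 2 to its name and list its PGs, assuming CLI access (the pool name placeholder is hypothetical):

    ceph osd lspools            # map pool id 2 to a pool name
    ceph pg ls-by-pool <pool>   # one row per PG; 32 rows after the split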
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
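Note: the three config-key set commands above are cephadm persisting its host inventory and OSD-removal queue in the mon's key/value store; the audit lines appear to carry no cmd payload because the stored values (large JSON blobs) are not echoed, and those values stay elided here. A hedged way to inspect what was written, assuming admin access:

    ceph config-key ls | grep mgr/cephadm/host.compute-2
    ceph config-key get mgr/cephadm/osd_remove_queue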
Dec  1 04:49:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v84: 69 pgs: 32 peering, 31 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:14 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev ec5ed33b-a956-4096-8c36-b6a4b2cfb306 (Updating mon deployment (+2 -> 3))
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
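Note: before deploying a new mon, cephadm gathers exactly three inputs, visible as the three dispatches above. The same data can be pulled by hand with the commands the mgr itself issued:

    ceph auth get mon.                     # mon. keyring for the new daemon
    ceph config get mon public_network    # network the mon must bind on
    ceph config generate-minimal-conf     # minimal ceph.conf to ship to the host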
Dec  1 04:49:14 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Dec  1 04:49:14 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec  1 04:49:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3663653222' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3663653222' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Dec  1 04:49:15 np0005540825 vigilant_volhard[86069]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
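Note: vigilant_volhard[86069] is the stdout of the container started at 04:49:13, confirming that the `osd pool application enable` call from the Ansible task succeeded. A verification step, assuming CLI access:

    ceph osd pool application get cephfs.cephfs.meta   # expect something like {"cephfs": {}}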
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Dec  1 04:49:15 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 58abb829-3afb-40cd-8ac1-b7b55a66e74e (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: Deploying daemon mon.compute-2 on compute-2
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3663653222' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3663653222' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.1d( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.1e( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.1f( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.7( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.8( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.1c( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.0( empty local-lis/les=28/30 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.1( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.2( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.3( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.5( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.b( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.f( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.c( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.e( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.10( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.14( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.16( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.13( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.17( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.12( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.18( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.1a( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.19( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 30 pg[2.11( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [1] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
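Note: the "AllReplicasActivated ... Activating complete" run mirrors the earlier peering burst: all 32 PGs of pool 2 are now active on osd.1 (a single-OSD acting set, so there are no replicas to wait for). A quick check, assuming CLI access:

    ceph pg dump pgs_brief | awk '$1 ~ /^2\./'   # pool-2 PGs should read active+clean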
Dec  1 04:49:15 np0005540825 systemd[1]: libpod-45265630d5e176bac9b498f81762725fc7bddb7b75a4639806073d89aeb5ec47.scope: Deactivated successfully.
Dec  1 04:49:15 np0005540825 podman[86054]: 2025-12-01 09:49:15.568797383 +0000 UTC m=+2.095684678 container died 45265630d5e176bac9b498f81762725fc7bddb7b75a4639806073d89aeb5ec47 (image=quay.io/ceph/ceph:v19, name=vigilant_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:49:15 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a0633df42a44db2ce972304d73941633cdd321efbb5a3829e0da36c58a5954b1-merged.mount: Deactivated successfully.
Dec  1 04:49:15 np0005540825 podman[86054]: 2025-12-01 09:49:15.622573579 +0000 UTC m=+2.149460834 container remove 45265630d5e176bac9b498f81762725fc7bddb7b75a4639806073d89aeb5ec47 (image=quay.io/ceph/ceph:v19, name=vigilant_volhard, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:15 np0005540825 systemd[1]: libpod-conmon-45265630d5e176bac9b498f81762725fc7bddb7b75a4639806073d89aeb5ec47.scope: Deactivated successfully.
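Note: the container lines for 45265630... trace a complete one-shot lifecycle (create, init, start, attach, died, remove), with systemd tearing down the matching libpod/conmon scopes and overlay mount. Hypothetical commands to observe such short-lived containers after the fact, not taken from this log:

    podman events --since 10m --filter event=died
    journalctl -u 'libpod-conmon-*' --since '10 min ago'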
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec  1 04:49:15 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec  1 04:49:15 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
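Note: CEPHADM_APPLY_SPEC_FAIL clearing means the mon/mgr service specs that had previously failed to apply have now gone through, consistent with the mon.compute-2 deployment above. To confirm, assuming CLI access:

    ceph health detail
    ceph orch ls mon   # running vs. expected daemon counts for the mon service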
Dec  1 04:49:15 np0005540825 python3[86130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:16 np0005540825 podman[86131]: 2025-12-01 09:49:16.07099282 +0000 UTC m=+0.070317365 container create 9b693113d694ac68ea7834cb0b2b110c6242fd6c31d5a37b8974a1d048092c9a (image=quay.io/ceph/ceph:v19, name=quirky_galois, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 04:49:16 np0005540825 systemd[1]: Started libpod-conmon-9b693113d694ac68ea7834cb0b2b110c6242fd6c31d5a37b8974a1d048092c9a.scope.
Dec  1 04:49:16 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1753c3615e70937fd2a808b53f5ee41c5ab4387235e8c8112a55f05a6c6285c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1753c3615e70937fd2a808b53f5ee41c5ab4387235e8c8112a55f05a6c6285c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:16 np0005540825 podman[86131]: 2025-12-01 09:49:16.048554393 +0000 UTC m=+0.047878718 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:16 np0005540825 podman[86131]: 2025-12-01 09:49:16.162186559 +0000 UTC m=+0.161510934 container init 9b693113d694ac68ea7834cb0b2b110c6242fd6c31d5a37b8974a1d048092c9a (image=quay.io/ceph/ceph:v19, name=quirky_galois, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:16 np0005540825 podman[86131]: 2025-12-01 09:49:16.171689496 +0000 UTC m=+0.171013831 container start 9b693113d694ac68ea7834cb0b2b110c6242fd6c31d5a37b8974a1d048092c9a (image=quay.io/ceph/ceph:v19, name=quirky_galois, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:49:16 np0005540825 podman[86131]: 2025-12-01 09:49:16.175717765 +0000 UTC m=+0.175042120 container attach 9b693113d694ac68ea7834cb0b2b110c6242fd6c31d5a37b8974a1d048092c9a (image=quay.io/ceph/ceph:v19, name=quirky_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  1 04:49:16 np0005540825 ceph-mgr[74709]: [progress WARNING root] Starting Global Recovery Event,94 pgs not in active + clean state
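Note: the "Global Recovery Event" warning is the progress module reacting to 94 PGs leaving active+clean; here that is the expected fallout of the PG splits above, not data loss. Assuming the progress module's CLI (available in recent releases), recovery can be watched with:

    ceph -s          # recovery progress bar appears in the status output
    ceph progress    # list active progress events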
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/762968888' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/762968888' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Dec  1 04:49:16 np0005540825 quirky_galois[86146]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Dec  1 04:49:16 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 13cb063e-a517-4f02-9547-a11e1b9168b7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/762968888' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/762968888' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:16 np0005540825 systemd[1]: libpod-9b693113d694ac68ea7834cb0b2b110c6242fd6c31d5a37b8974a1d048092c9a.scope: Deactivated successfully.
Dec  1 04:49:16 np0005540825 podman[86131]: 2025-12-01 09:49:16.571572483 +0000 UTC m=+0.570896808 container died 9b693113d694ac68ea7834cb0b2b110c6242fd6c31d5a37b8974a1d048092c9a (image=quay.io/ceph/ceph:v19, name=quirky_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 04:49:16 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1753c3615e70937fd2a808b53f5ee41c5ab4387235e8c8112a55f05a6c6285c6-merged.mount: Deactivated successfully.
Dec  1 04:49:16 np0005540825 systemd[75739]: Starting Mark boot as successful...
Dec  1 04:49:16 np0005540825 podman[86131]: 2025-12-01 09:49:16.619930488 +0000 UTC m=+0.619254783 container remove 9b693113d694ac68ea7834cb0b2b110c6242fd6c31d5a37b8974a1d048092c9a (image=quay.io/ceph/ceph:v19, name=quirky_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 04:49:16 np0005540825 systemd[75739]: Finished Mark boot as successful.
Dec  1 04:49:16 np0005540825 systemd[1]: libpod-conmon-9b693113d694ac68ea7834cb0b2b110c6242fd6c31d5a37b8974a1d048092c9a.scope: Deactivated successfully.
Dec  1 04:49:16 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec  1 04:49:16 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1e scrub ok
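Note: these scrub lines (and 2.1d earlier) show the newly created, empty PGs passing their first consistency checks almost instantly. Scrubs can also be requested per PG on demand, assuming CLI access:

    ceph pg scrub 2.1e
    ceph pg deep-scrub 2.1f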
Dec  1 04:49:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v87: 100 pgs: 32 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:49:17 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Dec  1 04:49:17 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Dec  1 04:49:17 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3110694739; not ready for session (expect reconnect)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:49:17 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Dec  1 04:49:17 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  1 04:49:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
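Note: from 04:49:17 the monmap grows from one mon to two: compute-2 is added at epoch e2, the mgr's `mon metadata` probes fail transiently (first ENOENT, then EINVAL) until the new mon finishes registering, and compute-0 calls an election. The vda warning is benign; virtio disks commonly expose no model or serial for a unique device id. Quorum can be checked with:

    ceph mon stat
    ceph quorum_status --format json-pretty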
Dec  1 04:49:17 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Dec  1 04:49:17 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Dec  1 04:49:17 np0005540825 python3[86259]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:49:18 np0005540825 python3[86330]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764582557.377341-37256-8878818398863/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:49:18 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3110694739; not ready for session (expect reconnect)
Dec  1 04:49:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:49:18 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:49:18 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  1 04:49:18 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec  1 04:49:18 np0005540825 python3[86432]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:49:18 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec  1 04:49:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v88: 100 pgs: 32 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:18 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:18 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:19 np0005540825 python3[86507]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764582558.3526468-37270-47130778954181/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=f12619a19fe45688cf79ffc88b49812db631e487 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:49:19 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3110694739; not ready for session (expect reconnect)
Dec  1 04:49:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:49:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:49:19 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  1 04:49:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  1 04:49:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:49:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  1 04:49:19 np0005540825 python3[86557]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
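For readability, the assimilate invocation from the preceding journal line, reflowed as a shell command (content unchanged; #012 is the journal's escape for the trailing newline):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config assimilate-conf -i /home/assimilate_ceph.conf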
Dec  1 04:49:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  1 04:49:19 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:19 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  1 04:49:19 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec  1 04:49:19 np0005540825 podman[86558]: 2025-12-01 09:49:19.644646405 +0000 UTC m=+0.064737416 container create 9da097435560183df4b70cc51773c32a3be425bc4b9e622b5728b9f1ca4387fa (image=quay.io/ceph/ceph:v19, name=blissful_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:49:19 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec  1 04:49:19 np0005540825 systemd[1]: Started libpod-conmon-9da097435560183df4b70cc51773c32a3be425bc4b9e622b5728b9f1ca4387fa.scope.
Dec  1 04:49:19 np0005540825 podman[86558]: 2025-12-01 09:49:19.617397093 +0000 UTC m=+0.037488154 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:19 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e27f5d1709fa2c2ac3add360d44cd23ad63909115fc85704fa5e8024370f624/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e27f5d1709fa2c2ac3add360d44cd23ad63909115fc85704fa5e8024370f624/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e27f5d1709fa2c2ac3add360d44cd23ad63909115fc85704fa5e8024370f624/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:19 np0005540825 podman[86558]: 2025-12-01 09:49:19.772697027 +0000 UTC m=+0.192788118 container init 9da097435560183df4b70cc51773c32a3be425bc4b9e622b5728b9f1ca4387fa (image=quay.io/ceph/ceph:v19, name=blissful_leavitt, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 04:49:19 np0005540825 podman[86558]: 2025-12-01 09:49:19.787247502 +0000 UTC m=+0.207338523 container start 9da097435560183df4b70cc51773c32a3be425bc4b9e622b5728b9f1ca4387fa (image=quay.io/ceph/ceph:v19, name=blissful_leavitt, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:19 np0005540825 podman[86558]: 2025-12-01 09:49:19.791742011 +0000 UTC m=+0.211833023 container attach 9da097435560183df4b70cc51773c32a3be425bc4b9e622b5728b9f1ca4387fa (image=quay.io/ceph/ceph:v19, name=blissful_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  1 04:49:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  1 04:49:20 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3110694739; not ready for session (expect reconnect)
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:49:20 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  1 04:49:20 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:20 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  1 04:49:20 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts
Dec  1 04:49:20 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok
Dec  1 04:49:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v89: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:49:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:21 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event d55b1da1-e81e-4e04-95ff-fe7b2174b642 (Global Recovery Event) in 5 seconds
Dec  1 04:49:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  1 04:49:21 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3110694739; not ready for session (expect reconnect)
Dec  1 04:49:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:49:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:49:21 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  1 04:49:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  1 04:49:21 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:21 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  1 04:49:21 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec  1 04:49:21 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3110694739; not ready for session (expect reconnect)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : monmap epoch 2
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsid 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : last_changed 2025-12-01T09:49:17.408437+0000
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : created 2025-12-01T09:46:48.019470+0000
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap 
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fospow(active, since 2m)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
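Following the hint in the warning above, the two flagged pools would be tagged with an application; a minimal sketch (pool names taken from the health detail, the 'cephfs' application assumed here since these pools back the CephFS filesystem):

    ceph osd pool application enable cephfs.cephfs.meta cephfs
    ceph osd pool application enable cephfs.cephfs.data cephfs

The check is in fact cleared a few entries below (POOL_APP_NOT_ENABLED cleared, "Cluster is now healthy").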
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v90: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev ec5ed33b-a956-4096-8c36-b6a4b2cfb306 (Updating mon deployment (+2 -> 3))
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event ec5ed33b-a956-4096-8c36-b6a4b2cfb306 (Updating mon deployment (+2 -> 3)) in 8 seconds
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 9245d59f-af61-4484-ae95-66fd7cc73ffa (Updating mgr deployment (+2 -> 3))
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdtkls", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdtkls", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdtkls", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.kdtkls on compute-2
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.kdtkls on compute-2
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: Deploying daemon mon.compute-1 on compute-1
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0 calling monitor election
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-2 calling monitor election
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Dec  1 04:49:22 np0005540825 ceph-mon[74416]:    application not enabled on pool 'cephfs.cephfs.meta'
Dec  1 04:49:22 np0005540825 ceph-mon[74416]:    application not enabled on pool 'cephfs.cephfs.data'
Dec  1 04:49:22 np0005540825 ceph-mon[74416]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdtkls", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.1f( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.764305115s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 63.996150970s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.1e( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.764245987s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 63.996150970s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.1f( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.764219284s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.996150970s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.1b( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.764239311s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 63.996215820s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.a( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.764198303s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 63.996181488s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.1e( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.764178276s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.996150970s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.a( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.764162064s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.996181488s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.1b( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.764191628s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.996215820s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.9( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.770548820s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 64.002830505s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.6( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769872665s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 64.002235413s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.9( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.770446777s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.002830505s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.6( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769847870s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.002235413s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.1( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769823074s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 64.002403259s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.c( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769983292s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 64.002624512s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.4( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769693375s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 64.002334595s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.d( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769917488s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 64.002578735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.1( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769788742s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.002403259s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.c( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769958496s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.002624512s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.d( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769894600s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.002578735s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.4( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769651413s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.002334595s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.e( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769647598s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 64.002677917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.e( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769610405s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.002677917s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.13( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769629478s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 64.002838135s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.15( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769629478s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 64.002868652s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.15( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769597054s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.002868652s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.19( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769468307s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 64.002967834s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.19( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769447327s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.002967834s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.13( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769609451s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.002838135s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.10( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769399643s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active pruub 64.002693176s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[2.10( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=32 pruub=8.769093513s) [0] r=-1 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.002693176s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 9b19c30e-75a9-472f-881e-3ffb4e3f29f2 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 4a4810ed-cfee-4af3-91dc-713518568bec (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 4a4810ed-cfee-4af3-91dc-713518568bec (PG autoscaler increasing pool 2 PGs from 1 to 32) in 11 seconds
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 4938d042-84fa-4986-9102-91667e4b0b14 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 4938d042-84fa-4986-9102-91667e4b0b14 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 10 seconds
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev d54568f7-91c4-4d2e-8aa2-49d9deba552e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event d54568f7-91c4-4d2e-8aa2-49d9deba552e (PG autoscaler increasing pool 4 PGs from 1 to 32) in 8 seconds
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 58abb829-3afb-40cd-8ac1-b7b55a66e74e (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 58abb829-3afb-40cd-8ac1-b7b55a66e74e (PG autoscaler increasing pool 5 PGs from 1 to 32) in 7 seconds
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 13cb063e-a517-4f02-9547-a11e1b9168b7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 13cb063e-a517-4f02-9547-a11e1b9168b7 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 6 seconds
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 9b19c30e-75a9-472f-881e-3ffb4e3f29f2 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  1 04:49:22 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 9b19c30e-75a9-472f-881e-3ffb4e3f29f2 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
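The repeated "osd pool set ... pg_num/pg_num_actual/pgp_num_actual = 32" dispatches earlier in the log are the mgr's pg_autoscaler applying the events completed above (each pool raised from 1 to 32 PGs). One way to inspect its per-pool decisions afterwards, using the standard ceph CLI from any admin host:

    ceph osd pool autoscale-status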
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.18( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.1a( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.1b( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.e( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.5( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.1( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.d( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.c( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.a( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.9( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.8( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.15( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.13( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 32 pg[4.1f( empty local-lis/les=0/0 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:49:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  1 04:49:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3215176106' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  1 04:49:23 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3110694739; not ready for session (expect reconnect)
Dec  1 04:49:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:49:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:49:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3215176106' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  1 04:49:23 np0005540825 blissful_leavitt[86573]: 
Dec  1 04:49:23 np0005540825 blissful_leavitt[86573]: [global]
Dec  1 04:49:23 np0005540825 blissful_leavitt[86573]: 	fsid = 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:49:23 np0005540825 blissful_leavitt[86573]: 	mon_host = 192.168.122.100
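[annotation] The `config assimilate-conf` run audited above ingests an existing ceph.conf into the cluster's centralized config store and prints back only the options it could not assimilate — here the minimal `[global]` residue with `fsid` and `mon_host` emitted by the blissful_leavitt container. A minimal sketch of driving the same containerized command from Python, assuming the paths, fsid, and image tag visible in this log and the documented `-i <input file>` form of assimilate-conf:

```python
import subprocess

# Sketch: replay the containerized "ceph config assimilate-conf" call seen in
# this log. Volumes, fsid, and image tag are copied from the log lines above;
# the "-i" input-file form is the documented CLI usage and is an assumption
# about how this particular run supplied its conf.
cmd = [
    "podman", "run", "--rm", "--net=host",
    "--volume", "/etc/ceph:/etc/ceph:z",
    "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
    "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
    "--fsid", "365f19c2-81e5-5edd-b6b4-280555214d3a",
    "-c", "/etc/ceph/ceph.conf",
    "-k", "/etc/ceph/ceph.client.admin.keyring",
    "config", "assimilate-conf", "-i", "/home/assimilate_ceph.conf",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
# stdout carries the leftover options that were not assimilated, matching the
# "[global] / fsid / mon_host" block logged above.
print(result.stdout)
```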
Dec  1 04:49:23 np0005540825 systemd[1]: libpod-9da097435560183df4b70cc51773c32a3be425bc4b9e622b5728b9f1ca4387fa.scope: Deactivated successfully.
Dec  1 04:49:23 np0005540825 podman[86558]: 2025-12-01 09:49:23.539854201 +0000 UTC m=+3.959945212 container died 9da097435560183df4b70cc51773c32a3be425bc4b9e622b5728b9f1ca4387fa (image=quay.io/ceph/ceph:v19, name=blissful_leavitt, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:49:23 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1e27f5d1709fa2c2ac3add360d44cd23ad63909115fc85704fa5e8024370f624-merged.mount: Deactivated successfully.
Dec  1 04:49:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  1 04:49:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Dec  1 04:49:23 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:23 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec  1 04:49:23 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  1 04:49:24 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  1 04:49:24 np0005540825 podman[86558]: 2025-12-01 09:49:24.061477457 +0000 UTC m=+4.481568428 container remove 9da097435560183df4b70cc51773c32a3be425bc4b9e622b5728b9f1ca4387fa (image=quay.io/ceph/ceph:v19, name=blissful_leavitt, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:49:24 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: paxos.0).electionLogic(10) init, last seen epoch 10
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  1 04:49:24 np0005540825 systemd[1]: libpod-conmon-9da097435560183df4b70cc51773c32a3be425bc4b9e622b5728b9f1ca4387fa.scope: Deactivated successfully.
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:49:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:24.411+0000 7f9871923640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Dec  1 04:49:24 np0005540825 ceph-mgr[74709]: mgr.server handle_report got status from non-daemon mon.compute-2
Dec  1 04:49:24 np0005540825 python3[86636]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
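[annotation] The ansible task above shells out to podman for a one-shot `ceph config-key set`; the container it spawns is the `gracious_brown` instance created in the following lines, and the trailing `#012` is syslog's escape for the newline ansible appended to the raw command. A Python equivalent of that exact invocation, with every argument copied from the logged `_raw_params`:

```python
import subprocess

# Sketch: the one-shot "config-key set" container run that ansible issued
# above. All arguments are copied verbatim from the logged command line;
# nothing here is invented.
subprocess.run(
    [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--volume", "/tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "365f19c2-81e5-5edd-b6b4-280555214d3a",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "config-key", "set", "ssl_option",
        "no_sslv2:sslv3:no_tlsv1:no_tlsv1_1",
    ],
    check=True,
)
```

The `set ssl_option` confirmation this produces appears later in the log as the container's only output line before it is torn down.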
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:24 np0005540825 podman[86637]: 2025-12-01 09:49:24.595970455 +0000 UTC m=+0.081677835 container create 87006302448e8bacdb63338b2da419563048a9178b91bceab705872e6f157000 (image=quay.io/ceph/ceph:v19, name=gracious_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Dec  1 04:49:24 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:24 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  1 04:49:24 np0005540825 systemd[1]: Started libpod-conmon-87006302448e8bacdb63338b2da419563048a9178b91bceab705872e6f157000.scope.
Dec  1 04:49:24 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec  1 04:49:24 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec  1 04:49:24 np0005540825 podman[86637]: 2025-12-01 09:49:24.566962317 +0000 UTC m=+0.052669777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:24 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cade681de08582e9410147a8b7a7ae77b7b78c4c53efe0ef3a77abf77b296bf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cade681de08582e9410147a8b7a7ae77b7b78c4c53efe0ef3a77abf77b296bf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cade681de08582e9410147a8b7a7ae77b7b78c4c53efe0ef3a77abf77b296bf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v92: 162 pgs: 48 peering, 62 unknown, 52 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:24 np0005540825 podman[86637]: 2025-12-01 09:49:24.735616554 +0000 UTC m=+0.221324044 container init 87006302448e8bacdb63338b2da419563048a9178b91bceab705872e6f157000 (image=quay.io/ceph/ceph:v19, name=gracious_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 04:49:24 np0005540825 podman[86637]: 2025-12-01 09:49:24.740805391 +0000 UTC m=+0.226512771 container start 87006302448e8bacdb63338b2da419563048a9178b91bceab705872e6f157000 (image=quay.io/ceph/ceph:v19, name=gracious_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:24 np0005540825 podman[86637]: 2025-12-01 09:49:24.905627737 +0000 UTC m=+0.391335137 container attach 87006302448e8bacdb63338b2da419563048a9178b91bceab705872e6f157000 (image=quay.io/ceph/ceph:v19, name=gracious_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:25 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:25 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  1 04:49:25 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec  1 04:49:25 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec  1 04:49:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:26 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 10 completed events
Dec  1 04:49:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:49:26 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:26 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  1 04:49:26 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec  1 04:49:26 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec  1 04:49:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v93: 162 pgs: 48 peering, 62 unknown, 52 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:27 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:27 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  1 04:49:27 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec  1 04:49:27 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec  1 04:49:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:28 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:28 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  1 04:49:28 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec  1 04:49:28 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec  1 04:49:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v94: 162 pgs: 48 peering, 62 unknown, 52 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:49:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
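[annotation] The mgr keeps re-dispatching the same `osd pool set ... pg_num_actual 32` command because the mon is mid-election and cannot commit it; once quorum forms, the matching "finished" audit entries appear below. A sketch of issuing the equivalent pool change by hand, assuming an admin keyring on the host; the pool name and the `pg_num_actual` variable are copied from the audited command:

```python
import json
import subprocess

# Sketch: the pool change the mgr is retrying above, issued by hand.
# "pg_num_actual" is the variable the mgr uses to step a pool's PG count
# toward its target; it appears verbatim in the audited command.
subprocess.run(
    ["ceph", "osd", "pool", "set", "cephfs.cephfs.data", "pg_num_actual", "32"],
    check=True,
)

# Confirm the pool's PG count afterwards (assumes JSON output of
# "ceph osd pool get").
out = subprocess.run(
    ["ceph", "osd", "pool", "get", "cephfs.cephfs.data", "pg_num",
     "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
print(json.loads(out))
```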
Dec  1 04:49:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : monmap epoch 3
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsid 365f19c2-81e5-5edd-b6b4-280555214d3a
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : last_changed 2025-12-01T09:49:23.596118+0000
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : created 2025-12-01T09:46:48.019470+0000
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap 
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fospow(active, since 2m)
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : overall HEALTH_OK
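[annotation] At this point the election triggered by the monmap change has resolved: compute-0 is leader with compute-0, compute-2, compute-1 in quorum (ranks 0,1,2) and the cluster reports HEALTH_OK. A minimal sketch for verifying that quorum from the host, assuming an admin keyring; `ceph quorum_status` is a standard mon command:

```python
import json
import subprocess

# Sketch: confirm the three-mon quorum reported above.
status = json.loads(
    subprocess.run(
        ["ceph", "quorum_status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
)
print(status["quorum_leader_name"])  # expected here: "compute-0"
print(status["quorum_names"])        # expected: compute-0, compute-2, compute-1
```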
Dec  1 04:49:29 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:29 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.11 deep-scrub starts
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.11 deep-scrub ok
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0 calling monitor election
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-2 calling monitor election
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: overall HEALTH_OK
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Dec  1 04:49:29 np0005540825 ceph-mgr[74709]: [progress WARNING root] Starting Global Recovery Event,141 pgs not in active + clean state
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.839812279s) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active pruub 70.996467590s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.1f( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=8.839812279s) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown pruub 70.996467590s@ mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.15( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.10( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.14( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.13( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.9( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.8( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.a( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.d( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.d( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.13( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.5( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.e( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.1( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.c( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.1a( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.1a( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[3.1c( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=29/29 les/c/f=30/30/0 sis=32) [1] r=0 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.18( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 33 pg[4.1b( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=32) [1] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.ymizfm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ymizfm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ymizfm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:49:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:49:29 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.ymizfm on compute-1
Dec  1 04:49:29 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.ymizfm on compute-1
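[annotation] Before deploying the standby mgr, cephadm mints its credentials via the `auth get-or-create` audited above. A sketch of the same call issued by hand; the entity name and caps are copied from the audited command, and the returned keyring stanza is what cephadm ships to compute-1:

```python
import subprocess

# Sketch: the credential creation cephadm performed above, issued by hand.
# Entity name and caps are copied verbatim from the audited command.
key = subprocess.run(
    [
        "ceph", "auth", "get-or-create", "mgr.compute-1.ymizfm",
        "mon", "profile mgr", "osd", "allow *", "mds", "allow *",
    ],
    capture_output=True, text=True, check=True,
).stdout
print(key)  # keyring stanza for the new mgr daemon
```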
Dec  1 04:49:30 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3934606873; not ready for session (expect reconnect)
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: mon.compute-1 calling monitor election
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ymizfm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ymizfm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: Deploying daemon mgr.compute-1.ymizfm on compute-1
Dec  1 04:49:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 48 peering, 93 unknown, 52 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e34 e34: 2 total, 2 up, 2 in
Dec  1 04:49:30 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e34: 2 total, 2 up, 2 in
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1f( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1c( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1d( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.12( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.13( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.10( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.11( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.16( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.17( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.14( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.15( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.a( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.b( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.8( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.9( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.e( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.6( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.5( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.7( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.4( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.2( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.3( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.d( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.f( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.c( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1e( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.19( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.18( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1b( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1a( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1c( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.12( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1d( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.10( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.17( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.11( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.16( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.15( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.a( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.8( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.b( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.9( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.14( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.13( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.e( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.0( empty local-lis/les=33/34 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.6( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.5( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.4( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.7( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.2( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.d( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.3( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1e( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.c( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.18( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.19( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1b( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 34 pg[7.1a( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [1] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec  1 04:49:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1098136028' entity='client.admin' 
Dec  1 04:49:31 np0005540825 gracious_brown[86652]: set ssl_option
Dec  1 04:49:31 np0005540825 systemd[1]: libpod-87006302448e8bacdb63338b2da419563048a9178b91bceab705872e6f157000.scope: Deactivated successfully.
Dec  1 04:49:31 np0005540825 podman[86637]: 2025-12-01 09:49:31.271539366 +0000 UTC m=+6.757246776 container died 87006302448e8bacdb63338b2da419563048a9178b91bceab705872e6f157000 (image=quay.io/ceph/ceph:v19, name=gracious_brown, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:31 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0cade681de08582e9410147a8b7a7ae77b7b78c4c53efe0ef3a77abf77b296bf-merged.mount: Deactivated successfully.
Dec  1 04:49:31 np0005540825 podman[86637]: 2025-12-01 09:49:31.307515279 +0000 UTC m=+6.793222659 container remove 87006302448e8bacdb63338b2da419563048a9178b91bceab705872e6f157000 (image=quay.io/ceph/ceph:v19, name=gracious_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:49:31 np0005540825 systemd[1]: libpod-conmon-87006302448e8bacdb63338b2da419563048a9178b91bceab705872e6f157000.scope: Deactivated successfully.
Dec  1 04:49:31 np0005540825 ceph-mgr[74709]: mgr.server handle_report got status from non-daemon mon.compute-1
Dec  1 04:49:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:31.603+0000 7f9871923640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Dec  1 04:49:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:49:31 np0005540825 python3[86715]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:49:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  1 04:49:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:31 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 9245d59f-af61-4484-ae95-66fd7cc73ffa (Updating mgr deployment (+2 -> 3))
Dec  1 04:49:31 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 9245d59f-af61-4484-ae95-66fd7cc73ffa (Updating mgr deployment (+2 -> 3)) in 9 seconds
Dec  1 04:49:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  1 04:49:31 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Dec  1 04:49:31 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Dec  1 04:49:31 np0005540825 podman[86716]: 2025-12-01 09:49:31.764474273 +0000 UTC m=+0.058872581 container create 39aa840485190dd9f6cceca0bcc1a32ed808553f9bc3aec04b19210575225ce1 (image=quay.io/ceph/ceph:v19, name=naughty_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  1 04:49:31 np0005540825 systemd[1]: Started libpod-conmon-39aa840485190dd9f6cceca0bcc1a32ed808553f9bc3aec04b19210575225ce1.scope.
Dec  1 04:49:31 np0005540825 podman[86716]: 2025-12-01 09:49:31.735983118 +0000 UTC m=+0.030381416 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:31 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:31 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917d0673ca29ca2e756b06446fc09597bb41c5d4067fe4d7e1f354fa2400d55/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:31 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917d0673ca29ca2e756b06446fc09597bb41c5d4067fe4d7e1f354fa2400d55/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:31 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917d0673ca29ca2e756b06446fc09597bb41c5d4067fe4d7e1f354fa2400d55/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:31 np0005540825 podman[86716]: 2025-12-01 09:49:31.862231002 +0000 UTC m=+0.156629270 container init 39aa840485190dd9f6cceca0bcc1a32ed808553f9bc3aec04b19210575225ce1 (image=quay.io/ceph/ceph:v19, name=naughty_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  1 04:49:31 np0005540825 podman[86716]: 2025-12-01 09:49:31.868752225 +0000 UTC m=+0.163150523 container start 39aa840485190dd9f6cceca0bcc1a32ed808553f9bc3aec04b19210575225ce1 (image=quay.io/ceph/ceph:v19, name=naughty_dewdney, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:49:31 np0005540825 podman[86716]: 2025-12-01 09:49:31.872522855 +0000 UTC m=+0.166921153 container attach 39aa840485190dd9f6cceca0bcc1a32ed808553f9bc3aec04b19210575225ce1 (image=quay.io/ceph/ceph:v19, name=naughty_dewdney, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1098136028' entity='client.admin' 
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:32 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 9a9d8bc2-4dc5-48bd-b532-4d560ce5d08f (Updating crash deployment (+1 -> 3))
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:49:32 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Dec  1 04:49:32 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Dec  1 04:49:32 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:49:32 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  1 04:49:32 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:32 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Dec  1 04:49:32 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  1 04:49:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:32 np0005540825 naughty_dewdney[86731]: Scheduled rgw.rgw update...
Dec  1 04:49:32 np0005540825 naughty_dewdney[86731]: Scheduled ingress.rgw.default update...
Dec  1 04:49:32 np0005540825 systemd[1]: libpod-39aa840485190dd9f6cceca0bcc1a32ed808553f9bc3aec04b19210575225ce1.scope: Deactivated successfully.
Dec  1 04:49:32 np0005540825 podman[86716]: 2025-12-01 09:49:32.330599168 +0000 UTC m=+0.624997486 container died 39aa840485190dd9f6cceca0bcc1a32ed808553f9bc3aec04b19210575225ce1 (image=quay.io/ceph/ceph:v19, name=naughty_dewdney, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  1 04:49:32 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0917d0673ca29ca2e756b06446fc09597bb41c5d4067fe4d7e1f354fa2400d55-merged.mount: Deactivated successfully.
Dec  1 04:49:32 np0005540825 podman[86716]: 2025-12-01 09:49:32.380800088 +0000 UTC m=+0.675198356 container remove 39aa840485190dd9f6cceca0bcc1a32ed808553f9bc3aec04b19210575225ce1 (image=quay.io/ceph/ceph:v19, name=naughty_dewdney, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 04:49:32 np0005540825 systemd[1]: libpod-conmon-39aa840485190dd9f6cceca0bcc1a32ed808553f9bc3aec04b19210575225ce1.scope: Deactivated successfully.
Dec  1 04:49:32 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec  1 04:49:32 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec  1 04:49:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 48 peering, 93 unknown, 52 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:32 np0005540825 python3[86843]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:49:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:49:33 np0005540825 python3[86914]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764582572.5543895-37290-137229199739333/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:49:33 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec  1 04:49:33 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec  1 04:49:33 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:33 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  1 04:49:33 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  1 04:49:33 np0005540825 ceph-mon[74416]: Deploying daemon crash.compute-2 on compute-2
Dec  1 04:49:33 np0005540825 ceph-mon[74416]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  1 04:49:33 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:33 np0005540825 ceph-mon[74416]: Saving service ingress.rgw.default spec with placement count:2
Dec  1 04:49:33 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:33 np0005540825 python3[86964]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:33 np0005540825 podman[86965]: 2025-12-01 09:49:33.952883818 +0000 UTC m=+0.048623779 container create 9f0ddd7bd396f9d46c809beac4389e6c2a11578c727984ae818b9bc750368612 (image=quay.io/ceph/ceph:v19, name=stoic_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 04:49:34 np0005540825 systemd[1]: Started libpod-conmon-9f0ddd7bd396f9d46c809beac4389e6c2a11578c727984ae818b9bc750368612.scope.
Dec  1 04:49:34 np0005540825 podman[86965]: 2025-12-01 09:49:33.926237473 +0000 UTC m=+0.021977434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:34 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d0fe39356fa873531a9f2b4ac16582c8a82488ff36b8be2651c1f7e2c96c1a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d0fe39356fa873531a9f2b4ac16582c8a82488ff36b8be2651c1f7e2c96c1a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d0fe39356fa873531a9f2b4ac16582c8a82488ff36b8be2651c1f7e2c96c1a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:34 np0005540825 podman[86965]: 2025-12-01 09:49:34.063015935 +0000 UTC m=+0.158755956 container init 9f0ddd7bd396f9d46c809beac4389e6c2a11578c727984ae818b9bc750368612 (image=quay.io/ceph/ceph:v19, name=stoic_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:34 np0005540825 podman[86965]: 2025-12-01 09:49:34.074154101 +0000 UTC m=+0.169894062 container start 9f0ddd7bd396f9d46c809beac4389e6c2a11578c727984ae818b9bc750368612 (image=quay.io/ceph/ceph:v19, name=stoic_shamir, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 04:49:34 np0005540825 podman[86965]: 2025-12-01 09:49:34.078846105 +0000 UTC m=+0.174586116 container attach 9f0ddd7bd396f9d46c809beac4389e6c2a11578c727984ae818b9bc750368612 (image=quay.io/ceph/ceph:v19, name=stoic_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 04:49:34 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:49:34 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service node-exporter spec with placement *
Dec  1 04:49:34 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Dec  1 04:49:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  1 04:49:34 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec  1 04:49:34 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec  1 04:49:34 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 11 completed events
Dec  1 04:49:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:49:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:49:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:49:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:49:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e35 e35: 2 total, 2 up, 2 in
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:35 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Dec  1 04:49:35 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e35: 2 total, 2 up, 2 in
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.18( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.1a( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.19( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.1a( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.1b( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.1c( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.e( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.e( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.d( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.f( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.3( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.2( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.1( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.5( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.2( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.7( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.7( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.4( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.8( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.a( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.9( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.15( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.16( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.15( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.17( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.12( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 49f6faec-1b66-4f5e-ab34-9e1d80bd8a73 (Global Recovery Event) in 6 seconds
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.11( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.10( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[5.1f( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.1c( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[6.1e( empty local-lis/les=0/0 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.1d( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092924118s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.208442688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.1d( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092896461s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.208442688s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.10( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092776299s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.208473206s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.13( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.093047142s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.208755493s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.10( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092761993s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.208473206s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.13( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.093011856s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.208755493s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.14( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092761040s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.208747864s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.14( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092744827s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.208747864s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.a( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092411041s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.208595276s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.a( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092368126s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.208595276s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.b( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092390060s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.208656311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.b( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092364311s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.208656311s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.8( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092256546s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.208610535s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.9( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092316628s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.208686829s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.9( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092293739s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.208686829s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.8( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092215538s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.208610535s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.e( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092166901s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.208778381s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.e( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092150688s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.208778381s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.6( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092119217s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.208831787s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.6( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.092096329s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.208831787s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.4( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091929436s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.208969116s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.4( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091911316s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.208969116s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.3( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091935158s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.209083557s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.2( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091790199s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.209014893s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.3( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091896057s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.209083557s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.2( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091766357s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.209014893s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.1e( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091648102s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.209121704s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091588974s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.209060669s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.1e( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091629982s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.209121704s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091558456s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.209060669s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.18( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091511726s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.209129333s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.18( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091488838s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.209129333s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.1b( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091434479s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 active pruub 79.209167480s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:35 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 35 pg[7.1b( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=35 pruub=11.091411591s) [0] r=-1 lpr=35 pi=[33,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.209167480s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:35 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Dec  1 04:49:35 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:35 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Dec  1 04:49:35 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  1 04:49:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:35 np0005540825 stoic_shamir[86982]: Scheduled node-exporter update...
Dec  1 04:49:35 np0005540825 stoic_shamir[86982]: Scheduled grafana update...
Dec  1 04:49:35 np0005540825 stoic_shamir[86982]: Scheduled prometheus update...
Dec  1 04:49:35 np0005540825 stoic_shamir[86982]: Scheduled alertmanager update...
Dec  1 04:49:35 np0005540825 systemd[1]: libpod-9f0ddd7bd396f9d46c809beac4389e6c2a11578c727984ae818b9bc750368612.scope: Deactivated successfully.
Dec  1 04:49:35 np0005540825 podman[86965]: 2025-12-01 09:49:35.756973265 +0000 UTC m=+1.852713196 container died 9f0ddd7bd396f9d46c809beac4389e6c2a11578c727984ae818b9bc750368612 (image=quay.io/ceph/ceph:v19, name=stoic_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 04:49:35 np0005540825 systemd[1]: var-lib-containers-storage-overlay-41d0fe39356fa873531a9f2b4ac16582c8a82488ff36b8be2651c1f7e2c96c1a-merged.mount: Deactivated successfully.
Dec  1 04:49:35 np0005540825 podman[86965]: 2025-12-01 09:49:35.804494064 +0000 UTC m=+1.900233995 container remove 9f0ddd7bd396f9d46c809beac4389e6c2a11578c727984ae818b9bc750368612 (image=quay.io/ceph/ceph:v19, name=stoic_shamir, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 04:49:35 np0005540825 systemd[1]: libpod-conmon-9f0ddd7bd396f9d46c809beac4389e6c2a11578c727984ae818b9bc750368612.scope: Deactivated successfully.
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:36 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 9a9d8bc2-4dc5-48bd-b532-4d560ce5d08f (Updating crash deployment (+1 -> 3))
Dec  1 04:49:36 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 9a9d8bc2-4dc5-48bd-b532-4d560ce5d08f (Updating crash deployment (+1 -> 3)) in 4 seconds
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:49:36 np0005540825 python3[87090]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:36 np0005540825 podman[87093]: 2025-12-01 09:49:36.425090981 +0000 UTC m=+0.046027709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:36 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec  1 04:49:36 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec  1 04:49:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:36 np0005540825 podman[87093]: 2025-12-01 09:49:36.851144276 +0000 UTC m=+0.472080904 container create 213dd43f068f0937c9c65fb459e5e55e94c7cb945e3a336503053c2de12c60c1 (image=quay.io/ceph/ceph:v19, name=eloquent_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 04:49:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec  1 04:49:36 np0005540825 systemd[1]: Started libpod-conmon-213dd43f068f0937c9c65fb459e5e55e94c7cb945e3a336503053c2de12c60c1.scope.
Dec  1 04:49:36 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2755883d868165135fc2a806323eae4ee2f0e3fd0feea90a4c30200946e0a0dd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2755883d868165135fc2a806323eae4ee2f0e3fd0feea90a4c30200946e0a0dd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2755883d868165135fc2a806323eae4ee2f0e3fd0feea90a4c30200946e0a0dd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:36 np0005540825 podman[87093]: 2025-12-01 09:49:36.970109278 +0000 UTC m=+0.591045936 container init 213dd43f068f0937c9c65fb459e5e55e94c7cb945e3a336503053c2de12c60c1 (image=quay.io/ceph/ceph:v19, name=eloquent_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:36 np0005540825 podman[87093]: 2025-12-01 09:49:36.977917544 +0000 UTC m=+0.598854192 container start 213dd43f068f0937c9c65fb459e5e55e94c7cb945e3a336503053c2de12c60c1 (image=quay.io/ceph/ceph:v19, name=eloquent_mclaren, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:36 np0005540825 podman[87093]: 2025-12-01 09:49:36.981660593 +0000 UTC m=+0.602597241 container attach 213dd43f068f0937c9c65fb459e5e55e94c7cb945e3a336503053c2de12c60c1 (image=quay.io/ceph/ceph:v19, name=eloquent_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e36 e36: 2 total, 2 up, 2 in
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: Saving service node-exporter spec with placement *
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: Saving service grafana spec with placement compute-0;count:1
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: Saving service prometheus spec with placement compute-0;count:1
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: Saving service alertmanager spec with placement compute-0;count:1
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.1e( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e36: 2 total, 2 up, 2 in
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.12( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.17( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.10( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.15( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.16( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.15( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.11( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.a( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.9( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.1f( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.8( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.1c( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.4( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.7( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.7( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.2( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.5( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.1( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.2( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.3( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.f( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.d( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.e( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.e( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.1c( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.1b( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.19( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.1a( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[6.1a( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 36 pg[5.18( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=35) [1] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:49:37 np0005540825 podman[87166]: 2025-12-01 09:49:37.270247417 +0000 UTC m=+0.079434755 container create 9f28c67dc9fd1a1375e3f7f018cfc5d26026af30ba7b887d26c351bd75217658 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:37 np0005540825 podman[87166]: 2025-12-01 09:49:37.238575959 +0000 UTC m=+0.047763377 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdtkls started
Dec  1 04:49:37 np0005540825 systemd[1]: Started libpod-conmon-9f28c67dc9fd1a1375e3f7f018cfc5d26026af30ba7b887d26c351bd75217658.scope.
Dec  1 04:49:37 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mgr.compute-2.kdtkls 192.168.122.102:0/901989185; not ready for session (expect reconnect)
Dec  1 04:49:37 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:37 np0005540825 podman[87166]: 2025-12-01 09:49:37.373121832 +0000 UTC m=+0.182309230 container init 9f28c67dc9fd1a1375e3f7f018cfc5d26026af30ba7b887d26c351bd75217658 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_blackburn, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 04:49:37 np0005540825 podman[87166]: 2025-12-01 09:49:37.382411869 +0000 UTC m=+0.191599177 container start 9f28c67dc9fd1a1375e3f7f018cfc5d26026af30ba7b887d26c351bd75217658 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  1 04:49:37 np0005540825 naughty_blackburn[87183]: 167 167
Dec  1 04:49:37 np0005540825 podman[87166]: 2025-12-01 09:49:37.386181078 +0000 UTC m=+0.195368476 container attach 9f28c67dc9fd1a1375e3f7f018cfc5d26026af30ba7b887d26c351bd75217658 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_blackburn, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 04:49:37 np0005540825 systemd[1]: libpod-9f28c67dc9fd1a1375e3f7f018cfc5d26026af30ba7b887d26c351bd75217658.scope: Deactivated successfully.
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Dec  1 04:49:37 np0005540825 podman[87188]: 2025-12-01 09:49:37.458599357 +0000 UTC m=+0.048413484 container died 9f28c67dc9fd1a1375e3f7f018cfc5d26026af30ba7b887d26c351bd75217658 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_blackburn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  1 04:49:37 np0005540825 systemd[1]: var-lib-containers-storage-overlay-02b6c3360b2a38c2b920627dfc8c37ddab4805c3fefeb798303ed6f974e06832-merged.mount: Deactivated successfully.
Dec  1 04:49:37 np0005540825 podman[87188]: 2025-12-01 09:49:37.49760752 +0000 UTC m=+0.087421567 container remove 9f28c67dc9fd1a1375e3f7f018cfc5d26026af30ba7b887d26c351bd75217658 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 04:49:37 np0005540825 systemd[1]: libpod-conmon-9f28c67dc9fd1a1375e3f7f018cfc5d26026af30ba7b887d26c351bd75217658.scope: Deactivated successfully.
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec  1 04:49:37 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec  1 04:49:37 np0005540825 podman[87212]: 2025-12-01 09:49:37.769251135 +0000 UTC m=+0.045540217 container create 0774688e4a96bf2960bad7007cc90d7004fec3963316fd0c0c23b3597a69e8c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_nobel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  1 04:49:37 np0005540825 systemd[1]: Started libpod-conmon-0774688e4a96bf2960bad7007cc90d7004fec3963316fd0c0c23b3597a69e8c7.scope.
Dec  1 04:49:37 np0005540825 podman[87212]: 2025-12-01 09:49:37.749122962 +0000 UTC m=+0.025412054 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:49:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3669410899' entity='client.admin' 
Dec  1 04:49:37 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ee8ce70659d6ff4edd67261795048f1cdc1a7549c3a3aad85e7242b0197ab4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ee8ce70659d6ff4edd67261795048f1cdc1a7549c3a3aad85e7242b0197ab4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ee8ce70659d6ff4edd67261795048f1cdc1a7549c3a3aad85e7242b0197ab4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ee8ce70659d6ff4edd67261795048f1cdc1a7549c3a3aad85e7242b0197ab4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ee8ce70659d6ff4edd67261795048f1cdc1a7549c3a3aad85e7242b0197ab4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:37 np0005540825 systemd[1]: libpod-213dd43f068f0937c9c65fb459e5e55e94c7cb945e3a336503053c2de12c60c1.scope: Deactivated successfully.
Dec  1 04:49:37 np0005540825 podman[87212]: 2025-12-01 09:49:37.884613111 +0000 UTC m=+0.160902273 container init 0774688e4a96bf2960bad7007cc90d7004fec3963316fd0c0c23b3597a69e8c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_nobel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:49:37 np0005540825 podman[87093]: 2025-12-01 09:49:37.885521375 +0000 UTC m=+1.506458033 container died 213dd43f068f0937c9c65fb459e5e55e94c7cb945e3a336503053c2de12c60c1 (image=quay.io/ceph/ceph:v19, name=eloquent_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 04:49:37 np0005540825 podman[87212]: 2025-12-01 09:49:37.898636752 +0000 UTC m=+0.174925834 container start 0774688e4a96bf2960bad7007cc90d7004fec3963316fd0c0c23b3597a69e8c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  1 04:49:37 np0005540825 podman[87212]: 2025-12-01 09:49:37.902405792 +0000 UTC m=+0.178694954 container attach 0774688e4a96bf2960bad7007cc90d7004fec3963316fd0c0c23b3597a69e8c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 04:49:37 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2755883d868165135fc2a806323eae4ee2f0e3fd0feea90a4c30200946e0a0dd-merged.mount: Deactivated successfully.
Dec  1 04:49:37 np0005540825 podman[87093]: 2025-12-01 09:49:37.93667615 +0000 UTC m=+1.557612788 container remove 213dd43f068f0937c9c65fb459e5e55e94c7cb945e3a336503053c2de12c60c1 (image=quay.io/ceph/ceph:v19, name=eloquent_mclaren, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 04:49:37 np0005540825 systemd[1]: libpod-conmon-213dd43f068f0937c9c65fb459e5e55e94c7cb945e3a336503053c2de12c60c1.scope: Deactivated successfully.
Dec  1 04:49:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:49:38 np0005540825 youthful_nobel[87230]: --> passed data devices: 0 physical, 1 LVM
Dec  1 04:49:38 np0005540825 youthful_nobel[87230]: --> All data devices are unavailable
Dec  1 04:49:38 np0005540825 python3[87276]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:38 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mgr.compute-1.ymizfm 192.168.122.101:0/1767876135; not ready for session (expect reconnect)
Dec  1 04:49:38 np0005540825 systemd[1]: libpod-0774688e4a96bf2960bad7007cc90d7004fec3963316fd0c0c23b3597a69e8c7.scope: Deactivated successfully.
Dec  1 04:49:38 np0005540825 podman[87212]: 2025-12-01 09:49:38.300548338 +0000 UTC m=+0.576837500 container died 0774688e4a96bf2960bad7007cc90d7004fec3963316fd0c0c23b3597a69e8c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:38 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e0ee8ce70659d6ff4edd67261795048f1cdc1a7549c3a3aad85e7242b0197ab4-merged.mount: Deactivated successfully.
Dec  1 04:49:38 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mgr.compute-2.kdtkls 192.168.122.102:0/901989185; not ready for session (expect reconnect)
Dec  1 04:49:38 np0005540825 podman[87212]: 2025-12-01 09:49:38.355920525 +0000 UTC m=+0.632209597 container remove 0774688e4a96bf2960bad7007cc90d7004fec3963316fd0c0c23b3597a69e8c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:38 np0005540825 podman[87284]: 2025-12-01 09:49:38.364123802 +0000 UTC m=+0.076714483 container create b70b965fab3d40b551512dcc434fe472b69fe8d26bc4cb5a0346da53655abcdb (image=quay.io/ceph/ceph:v19, name=clever_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 04:49:38 np0005540825 systemd[1]: Started libpod-conmon-b70b965fab3d40b551512dcc434fe472b69fe8d26bc4cb5a0346da53655abcdb.scope.
Dec  1 04:49:38 np0005540825 systemd[1]: libpod-conmon-0774688e4a96bf2960bad7007cc90d7004fec3963316fd0c0c23b3597a69e8c7.scope: Deactivated successfully.
Dec  1 04:49:38 np0005540825 podman[87284]: 2025-12-01 09:49:38.331597711 +0000 UTC m=+0.044188372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:38 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:38 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e6477080c61b7b874485b9f73ea8b5cb462f70bbd11921871972a27f8a7681/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:38 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e6477080c61b7b874485b9f73ea8b5cb462f70bbd11921871972a27f8a7681/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:38 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e6477080c61b7b874485b9f73ea8b5cb462f70bbd11921871972a27f8a7681/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:38 np0005540825 podman[87284]: 2025-12-01 09:49:38.461659986 +0000 UTC m=+0.174250737 container init b70b965fab3d40b551512dcc434fe472b69fe8d26bc4cb5a0346da53655abcdb (image=quay.io/ceph/ceph:v19, name=clever_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  1 04:49:38 np0005540825 podman[87284]: 2025-12-01 09:49:38.468961629 +0000 UTC m=+0.181552330 container start b70b965fab3d40b551512dcc434fe472b69fe8d26bc4cb5a0346da53655abcdb (image=quay.io/ceph/ceph:v19, name=clever_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:38 np0005540825 podman[87284]: 2025-12-01 09:49:38.472590655 +0000 UTC m=+0.185181356 container attach b70b965fab3d40b551512dcc434fe472b69fe8d26bc4cb5a0346da53655abcdb (image=quay.io/ceph/ceph:v19, name=clever_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  1 04:49:38 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec  1 04:49:38 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec  1 04:49:38 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3669410899' entity='client.admin' 
Dec  1 04:49:38 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.ymizfm started
Dec  1 04:49:38 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.fospow(active, since 2m), standbys: compute-2.kdtkls
Dec  1 04:49:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.kdtkls", "id": "compute-2.kdtkls"} v 0)
Dec  1 04:49:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-2.kdtkls", "id": "compute-2.kdtkls"}]: dispatch
Dec  1 04:49:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Dec  1 04:49:38 np0005540825 podman[87422]: 2025-12-01 09:49:38.85898699 +0000 UTC m=+0.033386855 container create f0a1561c19565c5c7ef8d0d2cffb3d70bdc929dbf35dd72c53baa560c0b14919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 04:49:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/819597' entity='client.admin' 
Dec  1 04:49:38 np0005540825 podman[87284]: 2025-12-01 09:49:38.885451721 +0000 UTC m=+0.598042372 container died b70b965fab3d40b551512dcc434fe472b69fe8d26bc4cb5a0346da53655abcdb (image=quay.io/ceph/ceph:v19, name=clever_goodall, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:38 np0005540825 systemd[1]: Started libpod-conmon-f0a1561c19565c5c7ef8d0d2cffb3d70bdc929dbf35dd72c53baa560c0b14919.scope.
Dec  1 04:49:38 np0005540825 systemd[1]: libpod-b70b965fab3d40b551512dcc434fe472b69fe8d26bc4cb5a0346da53655abcdb.scope: Deactivated successfully.
Dec  1 04:49:38 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:38 np0005540825 systemd[1]: var-lib-containers-storage-overlay-44e6477080c61b7b874485b9f73ea8b5cb462f70bbd11921871972a27f8a7681-merged.mount: Deactivated successfully.
Dec  1 04:49:38 np0005540825 podman[87422]: 2025-12-01 09:49:38.844956088 +0000 UTC m=+0.019355973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:49:38 np0005540825 podman[87284]: 2025-12-01 09:49:38.951700686 +0000 UTC m=+0.664291337 container remove b70b965fab3d40b551512dcc434fe472b69fe8d26bc4cb5a0346da53655abcdb (image=quay.io/ceph/ceph:v19, name=clever_goodall, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  1 04:49:38 np0005540825 podman[87422]: 2025-12-01 09:49:38.963633162 +0000 UTC m=+0.138033077 container init f0a1561c19565c5c7ef8d0d2cffb3d70bdc929dbf35dd72c53baa560c0b14919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:38 np0005540825 podman[87422]: 2025-12-01 09:49:38.969406145 +0000 UTC m=+0.143806030 container start f0a1561c19565c5c7ef8d0d2cffb3d70bdc929dbf35dd72c53baa560c0b14919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 04:49:38 np0005540825 podman[87422]: 2025-12-01 09:49:38.972749323 +0000 UTC m=+0.147149238 container attach f0a1561c19565c5c7ef8d0d2cffb3d70bdc929dbf35dd72c53baa560c0b14919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:38 np0005540825 serene_lehmann[87441]: 167 167
Dec  1 04:49:38 np0005540825 systemd[1]: libpod-f0a1561c19565c5c7ef8d0d2cffb3d70bdc929dbf35dd72c53baa560c0b14919.scope: Deactivated successfully.
Dec  1 04:49:38 np0005540825 conmon[87441]: conmon f0a1561c19565c5c7ef8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0a1561c19565c5c7ef8d0d2cffb3d70bdc929dbf35dd72c53baa560c0b14919.scope/container/memory.events
Dec  1 04:49:38 np0005540825 podman[87422]: 2025-12-01 09:49:38.978393893 +0000 UTC m=+0.152793758 container died f0a1561c19565c5c7ef8d0d2cffb3d70bdc929dbf35dd72c53baa560c0b14919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:38 np0005540825 systemd[1]: libpod-conmon-b70b965fab3d40b551512dcc434fe472b69fe8d26bc4cb5a0346da53655abcdb.scope: Deactivated successfully.
Dec  1 04:49:39 np0005540825 systemd[1]: var-lib-containers-storage-overlay-11fdb821061b73645aae001e08076c7466c9b43f5dd60b778db60cb8e14ae2f2-merged.mount: Deactivated successfully.
Dec  1 04:49:39 np0005540825 podman[87422]: 2025-12-01 09:49:39.019607655 +0000 UTC m=+0.194007560 container remove f0a1561c19565c5c7ef8d0d2cffb3d70bdc929dbf35dd72c53baa560c0b14919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lehmann, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  1 04:49:39 np0005540825 systemd[1]: libpod-conmon-f0a1561c19565c5c7ef8d0d2cffb3d70bdc929dbf35dd72c53baa560c0b14919.scope: Deactivated successfully.
Dec  1 04:49:39 np0005540825 podman[87499]: 2025-12-01 09:49:39.191349504 +0000 UTC m=+0.050382376 container create e7ea63ca6c3b8e9a56ff6430ead19b6e37b36aefaa70258764cd89127a38ac5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:39 np0005540825 systemd[1]: Started libpod-conmon-e7ea63ca6c3b8e9a56ff6430ead19b6e37b36aefaa70258764cd89127a38ac5e.scope.
Dec  1 04:49:39 np0005540825 podman[87499]: 2025-12-01 09:49:39.172658159 +0000 UTC m=+0.031691081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:49:39 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb38920dd4586a6bbe03dc1202e2a77340e9d8c499c584fbf5b9cf434cef549/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb38920dd4586a6bbe03dc1202e2a77340e9d8c499c584fbf5b9cf434cef549/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb38920dd4586a6bbe03dc1202e2a77340e9d8c499c584fbf5b9cf434cef549/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb38920dd4586a6bbe03dc1202e2a77340e9d8c499c584fbf5b9cf434cef549/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:39 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mgr.compute-1.ymizfm 192.168.122.101:0/1767876135; not ready for session (expect reconnect)
Dec  1 04:49:39 np0005540825 podman[87499]: 2025-12-01 09:49:39.317763052 +0000 UTC m=+0.176796024 container init e7ea63ca6c3b8e9a56ff6430ead19b6e37b36aefaa70258764cd89127a38ac5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_dijkstra, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:39 np0005540825 podman[87499]: 2025-12-01 09:49:39.332259986 +0000 UTC m=+0.191292868 container start e7ea63ca6c3b8e9a56ff6430ead19b6e37b36aefaa70258764cd89127a38ac5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  1 04:49:39 np0005540825 python3[87507]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
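The Ansible task above drives a one-shot ceph client: a disposable container ('podman run --rm ... --entrypoint ceph') that issues 'config set mgr mgr/dashboard/ssl false' and exits. A minimal Python sketch of the same invocation, with the image tag, fsid, and paths copied from the log line; this illustrates the pattern and is not the playbook's own code. The assimilate_ceph.conf and ceph_spec.yaml bind mounts are omitted because a bare 'config set' only needs the conf file and admin keyring.

    import subprocess

    # One-shot ceph client in a disposable container, mirroring the Ansible
    # task logged above. Image, fsid, and paths come from the log line; the
    # spec-file mounts are omitted since 'config set' does not read them.
    FSID = "365f19c2-81e5-5edd-b6b4-280555214d3a"
    subprocess.run(
        ["podman", "run", "--rm", "--net=host", "--ipc=host", "--interactive",
         "--volume", "/etc/ceph:/etc/ceph:z",
         "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
         "--fsid", FSID,
         "-c", "/etc/ceph/ceph.conf",
         "-k", "/etc/ceph/ceph.client.admin.keyring",
         "config", "set", "mgr", "mgr/dashboard/ssl", "false"],
        check=True)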
Dec  1 04:49:39 np0005540825 podman[87499]: 2025-12-01 09:49:39.335469311 +0000 UTC m=+0.194502253 container attach e7ea63ca6c3b8e9a56ff6430ead19b6e37b36aefaa70258764cd89127a38ac5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_dijkstra, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "0eea832e-1517-4443-89c1-2611993976f8"} v 0)
Dec  1 04:49:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0eea832e-1517-4443-89c1-2611993976f8"}]: dispatch
Dec  1 04:49:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
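The three mon lines above are the standard 'osd new' allocation path: client.bootstrap-osd asks the leader for an OSD id bound to the uuid that ceph-volume generated, the audit channel records the dispatch, and the leader checks whether old full osdmaps can be pruned before the map advances. A sketch of the same call, assuming it runs with a keyring carrying the needed capability (the log shows client.bootstrap-osd); for a given uuid the call is idempotent and returns the id already allocated.

    import subprocess

    # Ask the monitors for an OSD id bound to this ceph-volume uuid, as
    # client.bootstrap-osd does in the mon lines above. Re-running with the
    # same uuid returns the same id.
    uuid = "0eea832e-1517-4443-89c1-2611993976f8"
    osd_id = subprocess.run(
        ["ceph", "osd", "new", uuid],
        capture_output=True, text=True, check=True).stdout.strip()
    print(f"allocated osd.{osd_id}")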
Dec  1 04:49:39 np0005540825 podman[87523]: 2025-12-01 09:49:39.436398935 +0000 UTC m=+0.076341254 container create 0bf9da0b27f95d857d825f00512c52ef30514572674b8398ad08c6c11cdfaf4e (image=quay.io/ceph/ceph:v19, name=sharp_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  1 04:49:39 np0005540825 systemd[1]: Started libpod-conmon-0bf9da0b27f95d857d825f00512c52ef30514572674b8398ad08c6c11cdfaf4e.scope.
Dec  1 04:49:39 np0005540825 podman[87523]: 2025-12-01 09:49:39.405217959 +0000 UTC m=+0.045160328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:39 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec  1 04:49:39 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04f8639b1e49712d987784d894a532644496d524b2eec9ad8ac37e816157197/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04f8639b1e49712d987784d894a532644496d524b2eec9ad8ac37e816157197/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04f8639b1e49712d987784d894a532644496d524b2eec9ad8ac37e816157197/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:39 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec  1 04:49:39 np0005540825 podman[87523]: 2025-12-01 09:49:39.53817591 +0000 UTC m=+0.178118219 container init 0bf9da0b27f95d857d825f00512c52ef30514572674b8398ad08c6c11cdfaf4e (image=quay.io/ceph/ceph:v19, name=sharp_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Dec  1 04:49:39 np0005540825 podman[87523]: 2025-12-01 09:49:39.545390782 +0000 UTC m=+0.185333061 container start 0bf9da0b27f95d857d825f00512c52ef30514572674b8398ad08c6c11cdfaf4e (image=quay.io/ceph/ceph:v19, name=sharp_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  1 04:49:39 np0005540825 podman[87523]: 2025-12-01 09:49:39.548782051 +0000 UTC m=+0.188724330 container attach 0bf9da0b27f95d857d825f00512c52ef30514572674b8398ad08c6c11cdfaf4e (image=quay.io/ceph/ceph:v19, name=sharp_tu, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]: {
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:    "1": [
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:        {
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            "devices": [
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "/dev/loop3"
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            ],
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            "lv_name": "ceph_lv0",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            "lv_size": "21470642176",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            "name": "ceph_lv0",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            "tags": {
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.cephx_lockbox_secret": "",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.cluster_name": "ceph",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.crush_device_class": "",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.encrypted": "0",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.osd_id": "1",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.type": "block",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.vdo": "0",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:                "ceph.with_tpm": "0"
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            },
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            "type": "block",
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:            "vg_name": "ceph_vg0"
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:        }
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]:    ]
Dec  1 04:49:39 np0005540825 sad_dijkstra[87518]: }
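Reassembled, the sad_dijkstra stdout above is a single JSON document whose shape matches 'ceph-volume lvm list --format json': a map from OSD id to the logical volumes backing that OSD, with the ceph.* LVM tags inlined. A short sketch that pulls out the fields of interest; the literal below is a trimmed stand-in for the full blob in the log.

    import json

    # Trimmed stand-in for the JSON logged above (OSD id -> list of LVs,
    # each carrying its ceph.* tags).
    raw_json = """
    {"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
            "devices": ["/dev/loop3"],
            "tags": {"ceph.osd_id": "1",
                     "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
                     "ceph.type": "block"}}]}
    """

    report = json.loads(raw_json)
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            # Each LV names the OSD it backs and the devices underneath it.
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']} "
                  f"(osd_fsid {tags['ceph.osd_fsid']}, type {tags['ceph.type']})")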
Dec  1 04:49:39 np0005540825 systemd[1]: libpod-e7ea63ca6c3b8e9a56ff6430ead19b6e37b36aefaa70258764cd89127a38ac5e.scope: Deactivated successfully.
Dec  1 04:49:39 np0005540825 podman[87499]: 2025-12-01 09:49:39.685426541 +0000 UTC m=+0.544459423 container died e7ea63ca6c3b8e9a56ff6430ead19b6e37b36aefaa70258764cd89127a38ac5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  1 04:49:39 np0005540825 systemd[1]: var-lib-containers-storage-overlay-bfb38920dd4586a6bbe03dc1202e2a77340e9d8c499c584fbf5b9cf434cef549-merged.mount: Deactivated successfully.
Dec  1 04:49:39 np0005540825 podman[87499]: 2025-12-01 09:49:39.739236396 +0000 UTC m=+0.598269278 container remove e7ea63ca6c3b8e9a56ff6430ead19b6e37b36aefaa70258764cd89127a38ac5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:39 np0005540825 systemd[1]: libpod-conmon-e7ea63ca6c3b8e9a56ff6430ead19b6e37b36aefaa70258764cd89127a38ac5e.scope: Deactivated successfully.
Dec  1 04:49:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Dec  1 04:49:40 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mgr.compute-1.ymizfm 192.168.122.101:0/1767876135; not ready for session (expect reconnect)
Dec  1 04:49:40 np0005540825 podman[87666]: 2025-12-01 09:49:40.373042033 +0000 UTC m=+0.073784315 container create 20cf6930133f67689f2a0435b4044927842d2544f0957b44d138f8b5ae529505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ganguly, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  1 04:49:40 np0005540825 podman[87666]: 2025-12-01 09:49:40.322423223 +0000 UTC m=+0.023165505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:49:40 np0005540825 systemd[1]: Started libpod-conmon-20cf6930133f67689f2a0435b4044927842d2544f0957b44d138f8b5ae529505.scope.
Dec  1 04:49:40 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:40 np0005540825 podman[87666]: 2025-12-01 09:49:40.478295941 +0000 UTC m=+0.179038243 container init 20cf6930133f67689f2a0435b4044927842d2544f0957b44d138f8b5ae529505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:40 np0005540825 podman[87666]: 2025-12-01 09:49:40.484806514 +0000 UTC m=+0.185548776 container start 20cf6930133f67689f2a0435b4044927842d2544f0957b44d138f8b5ae529505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 04:49:40 np0005540825 podman[87666]: 2025-12-01 09:49:40.488956994 +0000 UTC m=+0.189699286 container attach 20cf6930133f67689f2a0435b4044927842d2544f0957b44d138f8b5ae529505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ganguly, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 04:49:40 np0005540825 systemd[1]: libpod-20cf6930133f67689f2a0435b4044927842d2544f0957b44d138f8b5ae529505.scope: Deactivated successfully.
Dec  1 04:49:40 np0005540825 intelligent_ganguly[87680]: 167 167
Dec  1 04:49:40 np0005540825 conmon[87680]: conmon 20cf6930133f67689f2a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20cf6930133f67689f2a0435b4044927842d2544f0957b44d138f8b5ae529505.scope/container/memory.events
Dec  1 04:49:40 np0005540825 podman[87666]: 2025-12-01 09:49:40.494252494 +0000 UTC m=+0.194994776 container died 20cf6930133f67689f2a0435b4044927842d2544f0957b44d138f8b5ae529505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  1 04:49:40 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b7550db60b98f9979d22985561992a80d0b14350a8f4061945419fc50acec066-merged.mount: Deactivated successfully.
Dec  1 04:49:40 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec  1 04:49:40 np0005540825 podman[87666]: 2025-12-01 09:49:40.537242053 +0000 UTC m=+0.237984305 container remove 20cf6930133f67689f2a0435b4044927842d2544f0957b44d138f8b5ae529505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ganguly, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:40 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec  1 04:49:40 np0005540825 systemd[1]: libpod-conmon-20cf6930133f67689f2a0435b4044927842d2544f0957b44d138f8b5ae529505.scope: Deactivated successfully.
Dec  1 04:49:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0eea832e-1517-4443-89c1-2611993976f8"}]': finished
Dec  1 04:49:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e37 e37: 3 total, 2 up, 3 in
Dec  1 04:49:40 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 2 up, 3 in
Dec  1 04:49:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:49:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:49:40 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
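The mgr error just above is transient rather than a fault: osd.2 was allocated moments earlier via 'osd new' and is in but not yet up (osdmap e37 above reports 3 total, 2 up), so the monitors hold no daemon metadata for it until the OSD boots and reports in. A polling sketch that tolerates that window; the interval and retry count are arbitrary choices, not values from the log.

    import json
    import subprocess
    import time

    def osd_metadata(osd_id):
        # 'ceph osd metadata' fails with (2) No such file or directory until
        # the daemon has booted and pushed its metadata to the monitors,
        # which is exactly the state logged above for osd.2.
        p = subprocess.run(
            ["ceph", "osd", "metadata", str(osd_id), "--format", "json"],
            capture_output=True, text=True)
        return json.loads(p.stdout) if p.returncode == 0 else None

    for _ in range(30):           # poll until osd.2 reports in
        md = osd_metadata(2)
        if md is not None:
            print(md.get("hostname"), md.get("osd_objectstore"))
            break
        time.sleep(2)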
Dec  1 04:49:40 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 13 completed events
Dec  1 04:49:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:49:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:40 np0005540825 podman[87708]: 2025-12-01 09:49:40.775063362 +0000 UTC m=+0.074652778 container create a35d47727a1e1585461574daa20f1cb83b8d67f05f26ceef044ed373a8187ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 04:49:40 np0005540825 systemd[1]: Started libpod-conmon-a35d47727a1e1585461574daa20f1cb83b8d67f05f26ceef044ed373a8187ece.scope.
Dec  1 04:49:40 np0005540825 podman[87708]: 2025-12-01 09:49:40.745207711 +0000 UTC m=+0.044797197 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:49:40 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7085c29d5889115c7dd8b922bed214be170e0c3f072a49b222d8db6f57247166/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7085c29d5889115c7dd8b922bed214be170e0c3f072a49b222d8db6f57247166/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7085c29d5889115c7dd8b922bed214be170e0c3f072a49b222d8db6f57247166/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7085c29d5889115c7dd8b922bed214be170e0c3f072a49b222d8db6f57247166/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:40 np0005540825 podman[87708]: 2025-12-01 09:49:40.877947027 +0000 UTC m=+0.177536503 container init a35d47727a1e1585461574daa20f1cb83b8d67f05f26ceef044ed373a8187ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:40 np0005540825 podman[87708]: 2025-12-01 09:49:40.887159301 +0000 UTC m=+0.186748727 container start a35d47727a1e1585461574daa20f1cb83b8d67f05f26ceef044ed373a8187ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 04:49:40 np0005540825 podman[87708]: 2025-12-01 09:49:40.890930641 +0000 UTC m=+0.190520117 container attach a35d47727a1e1585461574daa20f1cb83b8d67f05f26ceef044ed373a8187ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Dec  1 04:49:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:49:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:49:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:49:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:49:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:49:41 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:49:41 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from mgr.compute-1.ymizfm 192.168.122.101:0/1767876135; not ready for session (expect reconnect)
Dec  1 04:49:41 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/819597' entity='client.admin' 
Dec  1 04:49:41 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.102:0/1836222916' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0eea832e-1517-4443-89c1-2611993976f8"}]: dispatch
Dec  1 04:49:41 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0eea832e-1517-4443-89c1-2611993976f8"}]: dispatch
Dec  1 04:49:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/88022779' entity='client.admin' 
Dec  1 04:49:41 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.fospow(active, since 2m), standbys: compute-2.kdtkls, compute-1.ymizfm
Dec  1 04:49:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.ymizfm", "id": "compute-1.ymizfm"} v 0)
Dec  1 04:49:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-1.ymizfm", "id": "compute-1.ymizfm"}]: dispatch
Dec  1 04:49:41 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.e deep-scrub starts
Dec  1 04:49:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:41 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.e deep-scrub ok
Dec  1 04:49:41 np0005540825 systemd[1]: libpod-0bf9da0b27f95d857d825f00512c52ef30514572674b8398ad08c6c11cdfaf4e.scope: Deactivated successfully.
Dec  1 04:49:41 np0005540825 podman[87523]: 2025-12-01 09:49:41.585251362 +0000 UTC m=+2.225193641 container died 0bf9da0b27f95d857d825f00512c52ef30514572674b8398ad08c6c11cdfaf4e (image=quay.io/ceph/ceph:v19, name=sharp_tu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 04:49:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c04f8639b1e49712d987784d894a532644496d524b2eec9ad8ac37e816157197-merged.mount: Deactivated successfully.
Dec  1 04:49:41 np0005540825 podman[87523]: 2025-12-01 09:49:41.623424093 +0000 UTC m=+2.263366382 container remove 0bf9da0b27f95d857d825f00512c52ef30514572674b8398ad08c6c11cdfaf4e (image=quay.io/ceph/ceph:v19, name=sharp_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:41 np0005540825 systemd[1]: libpod-conmon-0bf9da0b27f95d857d825f00512c52ef30514572674b8398ad08c6c11cdfaf4e.scope: Deactivated successfully.
Dec  1 04:49:41 np0005540825 lvm[87808]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:49:41 np0005540825 lvm[87808]: VG ceph_vg0 finished
Dec  1 04:49:41 np0005540825 friendly_lamarr[87724]: {}
Dec  1 04:49:41 np0005540825 podman[87708]: 2025-12-01 09:49:41.711669481 +0000 UTC m=+1.011258867 container died a35d47727a1e1585461574daa20f1cb83b8d67f05f26ceef044ed373a8187ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lamarr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:41 np0005540825 systemd[1]: libpod-a35d47727a1e1585461574daa20f1cb83b8d67f05f26ceef044ed373a8187ece.scope: Deactivated successfully.
Dec  1 04:49:41 np0005540825 systemd[1]: libpod-a35d47727a1e1585461574daa20f1cb83b8d67f05f26ceef044ed373a8187ece.scope: Consumed 1.275s CPU time.
Dec  1 04:49:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7085c29d5889115c7dd8b922bed214be170e0c3f072a49b222d8db6f57247166-merged.mount: Deactivated successfully.
Dec  1 04:49:41 np0005540825 podman[87708]: 2025-12-01 09:49:41.818882511 +0000 UTC m=+1.118471927 container remove a35d47727a1e1585461574daa20f1cb83b8d67f05f26ceef044ed373a8187ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:49:41 np0005540825 systemd[1]: libpod-conmon-a35d47727a1e1585461574daa20f1cb83b8d67f05f26ceef044ed373a8187ece.scope: Deactivated successfully.
Dec  1 04:49:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:49:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:49:42 np0005540825 python3[87851]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
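The trailing '#012' here (and on the podman-run tasks below) is rsyslog's octal escape for an embedded newline, not part of the command. The task itself asks podman for every container whose name matches the cephadm mgr naming scheme and prints each one's command line; the same query from Python:

    import subprocess

    # List cephadm-managed mgr containers by name pattern and print their
    # commands, as the Ansible task above does ('-f' is short for '--filter').
    out = subprocess.run(
        ["podman", "ps", "-a",
         "--filter", "name=ceph-?(.*)-mgr.*",
         "--format", "{{.Command}}", "--no-trunc"],
        capture_output=True, text=True, check=True).stdout
    print(out, end="")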
Dec  1 04:49:42 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:42 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0eea832e-1517-4443-89c1-2611993976f8"}]': finished
Dec  1 04:49:42 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/88022779' entity='client.admin' 
Dec  1 04:49:42 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:42 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:42 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' 
Dec  1 04:49:42 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec  1 04:49:42 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec  1 04:49:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:42 np0005540825 python3[87890]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.fospow/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:42 np0005540825 podman[87891]: 2025-12-01 09:49:42.816344413 +0000 UTC m=+0.037553546 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:42 np0005540825 podman[87891]: 2025-12-01 09:49:42.911577235 +0000 UTC m=+0.132786308 container create 99a63205e8d4ca675970e253d6cedc4ef2251eb4a5c494df4dc69ee304ca2836 (image=quay.io/ceph/ceph:v19, name=tender_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:42 np0005540825 systemd[1]: Started libpod-conmon-99a63205e8d4ca675970e253d6cedc4ef2251eb4a5c494df4dc69ee304ca2836.scope.
Dec  1 04:49:42 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6258ef573631823f3587c37430d5950f6bc28eeeb3a6b4d0d9dfcbace666b34/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6258ef573631823f3587c37430d5950f6bc28eeeb3a6b4d0d9dfcbace666b34/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6258ef573631823f3587c37430d5950f6bc28eeeb3a6b4d0d9dfcbace666b34/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:43 np0005540825 podman[87891]: 2025-12-01 09:49:43.026066558 +0000 UTC m=+0.247275691 container init 99a63205e8d4ca675970e253d6cedc4ef2251eb4a5c494df4dc69ee304ca2836 (image=quay.io/ceph/ceph:v19, name=tender_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 04:49:43 np0005540825 podman[87891]: 2025-12-01 09:49:43.041069166 +0000 UTC m=+0.262278209 container start 99a63205e8d4ca675970e253d6cedc4ef2251eb4a5c494df4dc69ee304ca2836 (image=quay.io/ceph/ceph:v19, name=tender_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Dec  1 04:49:43 np0005540825 podman[87891]: 2025-12-01 09:49:43.046374116 +0000 UTC m=+0.267583269 container attach 99a63205e8d4ca675970e253d6cedc4ef2251eb4a5c494df4dc69ee304ca2836 (image=quay.io/ceph/ceph:v19, name=tender_bouman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:49:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.fospow/server_addr}] v 0)
Dec  1 04:49:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4060395120' entity='client.admin' 
Dec  1 04:49:43 np0005540825 systemd[1]: libpod-99a63205e8d4ca675970e253d6cedc4ef2251eb4a5c494df4dc69ee304ca2836.scope: Deactivated successfully.
Dec  1 04:49:43 np0005540825 podman[87891]: 2025-12-01 09:49:43.52974679 +0000 UTC m=+0.750955843 container died 99a63205e8d4ca675970e253d6cedc4ef2251eb4a5c494df4dc69ee304ca2836 (image=quay.io/ceph/ceph:v19, name=tender_bouman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 04:49:43 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4060395120' entity='client.admin' 
Dec  1 04:49:43 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Dec  1 04:49:43 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Dec  1 04:49:43 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b6258ef573631823f3587c37430d5950f6bc28eeeb3a6b4d0d9dfcbace666b34-merged.mount: Deactivated successfully.
Dec  1 04:49:43 np0005540825 podman[87891]: 2025-12-01 09:49:43.615286275 +0000 UTC m=+0.836495328 container remove 99a63205e8d4ca675970e253d6cedc4ef2251eb4a5c494df4dc69ee304ca2836 (image=quay.io/ceph/ceph:v19, name=tender_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 04:49:43 np0005540825 systemd[1]: libpod-conmon-99a63205e8d4ca675970e253d6cedc4ef2251eb4a5c494df4dc69ee304ca2836.scope: Deactivated successfully.
Dec  1 04:49:44 np0005540825 python3[87969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.ymizfm/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:44 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec  1 04:49:44 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec  1 04:49:44 np0005540825 podman[87970]: 2025-12-01 09:49:44.657228123 +0000 UTC m=+0.078160511 container create de3a9ee5cd5d6fbccc3f2ee4df8e88ea6cfc5a91a017d476f4eb4c0985fe8275 (image=quay.io/ceph/ceph:v19, name=friendly_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:49:44 np0005540825 systemd[1]: Started libpod-conmon-de3a9ee5cd5d6fbccc3f2ee4df8e88ea6cfc5a91a017d476f4eb4c0985fe8275.scope.
Dec  1 04:49:44 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0950ca900c9a4ac39c68487d200f415ae82b77f42fda84448e53ca0bf62f6782/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0950ca900c9a4ac39c68487d200f415ae82b77f42fda84448e53ca0bf62f6782/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0950ca900c9a4ac39c68487d200f415ae82b77f42fda84448e53ca0bf62f6782/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:44 np0005540825 podman[87970]: 2025-12-01 09:49:44.715035495 +0000 UTC m=+0.135967913 container init de3a9ee5cd5d6fbccc3f2ee4df8e88ea6cfc5a91a017d476f4eb4c0985fe8275 (image=quay.io/ceph/ceph:v19, name=friendly_hermann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:44 np0005540825 podman[87970]: 2025-12-01 09:49:44.720476699 +0000 UTC m=+0.141409087 container start de3a9ee5cd5d6fbccc3f2ee4df8e88ea6cfc5a91a017d476f4eb4c0985fe8275 (image=quay.io/ceph/ceph:v19, name=friendly_hermann, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:44 np0005540825 podman[87970]: 2025-12-01 09:49:44.724750172 +0000 UTC m=+0.145682560 container attach de3a9ee5cd5d6fbccc3f2ee4df8e88ea6cfc5a91a017d476f4eb4c0985fe8275 (image=quay.io/ceph/ceph:v19, name=friendly_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 04:49:44 np0005540825 podman[87970]: 2025-12-01 09:49:44.631980735 +0000 UTC m=+0.052913163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.ymizfm/server_addr}] v 0)
Dec  1 04:49:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1919605233' entity='client.admin' 
Dec  1 04:49:45 np0005540825 systemd[1]: libpod-de3a9ee5cd5d6fbccc3f2ee4df8e88ea6cfc5a91a017d476f4eb4c0985fe8275.scope: Deactivated successfully.
Dec  1 04:49:45 np0005540825 podman[87970]: 2025-12-01 09:49:45.143629487 +0000 UTC m=+0.564561915 container died de3a9ee5cd5d6fbccc3f2ee4df8e88ea6cfc5a91a017d476f4eb4c0985fe8275 (image=quay.io/ceph/ceph:v19, name=friendly_hermann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:45 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0950ca900c9a4ac39c68487d200f415ae82b77f42fda84448e53ca0bf62f6782-merged.mount: Deactivated successfully.
Dec  1 04:49:45 np0005540825 podman[87970]: 2025-12-01 09:49:45.185009133 +0000 UTC m=+0.605941531 container remove de3a9ee5cd5d6fbccc3f2ee4df8e88ea6cfc5a91a017d476f4eb4c0985fe8275 (image=quay.io/ceph/ceph:v19, name=friendly_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:45 np0005540825 systemd[1]: libpod-conmon-de3a9ee5cd5d6fbccc3f2ee4df8e88ea6cfc5a91a017d476f4eb4c0985fe8275.scope: Deactivated successfully.
Dec  1 04:49:45 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec  1 04:49:45 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1919605233' entity='client.admin' 
Dec  1 04:49:45 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec  1 04:49:46 np0005540825 python3[88049]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.kdtkls/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
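This is the third of three near-identical tasks (the others ran at 04:49:42 and 04:49:44 above) pinning each mgr's dashboard listener to its own host address. Condensed into one loop, with the daemon names and addresses taken from the log; a sketch of the pattern, not the playbook's code, and run here without the containerized wrapper.

    import subprocess

    # Pin each mgr's dashboard server_addr to its host IP, matching the
    # three 'ceph config set' calls logged above.
    mgr_addrs = {
        "compute-0.fospow": "192.168.122.100",
        "compute-1.ymizfm": "192.168.122.101",
        "compute-2.kdtkls": "192.168.122.102",
    }
    for name, addr in mgr_addrs.items():
        subprocess.run(
            ["ceph", "config", "set", "mgr",
             f"mgr/dashboard/{name}/server_addr", addr],
            check=True)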
Dec  1 04:49:46 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec  1 04:49:46 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec  1 04:49:46 np0005540825 podman[88050]: 2025-12-01 09:49:46.694393414 +0000 UTC m=+0.119173908 container create e499368ba8a2469dbede373625a93547a42fab7282683afd1889e0ec54b4d667 (image=quay.io/ceph/ceph:v19, name=clever_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  1 04:49:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:46 np0005540825 systemd[1]: Started libpod-conmon-e499368ba8a2469dbede373625a93547a42fab7282683afd1889e0ec54b4d667.scope.
Dec  1 04:49:46 np0005540825 podman[88050]: 2025-12-01 09:49:46.669181446 +0000 UTC m=+0.093962000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:46 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6706f921e0709c4042625e05a1002ae544822117286eb79c381d13e74bc428bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6706f921e0709c4042625e05a1002ae544822117286eb79c381d13e74bc428bc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6706f921e0709c4042625e05a1002ae544822117286eb79c381d13e74bc428bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:46 np0005540825 podman[88050]: 2025-12-01 09:49:46.79087866 +0000 UTC m=+0.215659234 container init e499368ba8a2469dbede373625a93547a42fab7282683afd1889e0ec54b4d667 (image=quay.io/ceph/ceph:v19, name=clever_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 04:49:46 np0005540825 podman[88050]: 2025-12-01 09:49:46.802464497 +0000 UTC m=+0.227245001 container start e499368ba8a2469dbede373625a93547a42fab7282683afd1889e0ec54b4d667 (image=quay.io/ceph/ceph:v19, name=clever_bhabha, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:46 np0005540825 podman[88050]: 2025-12-01 09:49:46.806808642 +0000 UTC m=+0.231589216 container attach e499368ba8a2469dbede373625a93547a42fab7282683afd1889e0ec54b4d667 (image=quay.io/ceph/ceph:v19, name=clever_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 04:49:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.kdtkls/server_addr}] v 0)
Dec  1 04:49:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3251606112' entity='client.admin' 
Dec  1 04:49:47 np0005540825 systemd[1]: libpod-e499368ba8a2469dbede373625a93547a42fab7282683afd1889e0ec54b4d667.scope: Deactivated successfully.
Dec  1 04:49:47 np0005540825 podman[88050]: 2025-12-01 09:49:47.245897681 +0000 UTC m=+0.670678135 container died e499368ba8a2469dbede373625a93547a42fab7282683afd1889e0ec54b4d667 (image=quay.io/ceph/ceph:v19, name=clever_bhabha, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:47 np0005540825 systemd[1]: var-lib-containers-storage-overlay-6706f921e0709c4042625e05a1002ae544822117286eb79c381d13e74bc428bc-merged.mount: Deactivated successfully.
Dec  1 04:49:47 np0005540825 podman[88050]: 2025-12-01 09:49:47.323170548 +0000 UTC m=+0.747951002 container remove e499368ba8a2469dbede373625a93547a42fab7282683afd1889e0ec54b4d667 (image=quay.io/ceph/ceph:v19, name=clever_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:47 np0005540825 systemd[1]: libpod-conmon-e499368ba8a2469dbede373625a93547a42fab7282683afd1889e0ec54b4d667.scope: Deactivated successfully.
Dec  1 04:49:47 np0005540825 python3[88128]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:47 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3251606112' entity='client.admin' 
Dec  1 04:49:47 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  1 04:49:47 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  1 04:49:47 np0005540825 podman[88129]: 2025-12-01 09:49:47.74379156 +0000 UTC m=+0.079371754 container create bdb33d83fe44852cff58305212d566927462f7a16b449def61a64af5a284f7be (image=quay.io/ceph/ceph:v19, name=interesting_wright, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:49:47 np0005540825 systemd[1]: Started libpod-conmon-bdb33d83fe44852cff58305212d566927462f7a16b449def61a64af5a284f7be.scope.
Dec  1 04:49:47 np0005540825 podman[88129]: 2025-12-01 09:49:47.692519201 +0000 UTC m=+0.028099485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:47 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:47 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad31336b4eb68f43223db5b6293dda5c9fa37fcb1819ea180506f924f82eef33/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:47 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad31336b4eb68f43223db5b6293dda5c9fa37fcb1819ea180506f924f82eef33/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:47 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad31336b4eb68f43223db5b6293dda5c9fa37fcb1819ea180506f924f82eef33/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:47 np0005540825 podman[88129]: 2025-12-01 09:49:47.856564087 +0000 UTC m=+0.192144331 container init bdb33d83fe44852cff58305212d566927462f7a16b449def61a64af5a284f7be (image=quay.io/ceph/ceph:v19, name=interesting_wright, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  1 04:49:47 np0005540825 podman[88129]: 2025-12-01 09:49:47.86500257 +0000 UTC m=+0.200582774 container start bdb33d83fe44852cff58305212d566927462f7a16b449def61a64af5a284f7be (image=quay.io/ceph/ceph:v19, name=interesting_wright, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:47 np0005540825 podman[88129]: 2025-12-01 09:49:47.871383659 +0000 UTC m=+0.206963933 container attach bdb33d83fe44852cff58305212d566927462f7a16b449def61a64af5a284f7be (image=quay.io/ceph/ceph:v19, name=interesting_wright, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:49:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:49:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec  1 04:49:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2873131532' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  1 04:49:48 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec  1 04:49:48 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec  1 04:49:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:49 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec  1 04:49:49 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec  1 04:49:49 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/2873131532' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  1 04:49:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2873131532' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  1 04:49:49 np0005540825 interesting_wright[88144]: module 'dashboard' is already disabled
Dec  1 04:49:50 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.fospow(active, since 2m), standbys: compute-2.kdtkls, compute-1.ymizfm
Dec  1 04:49:50 np0005540825 systemd[1]: libpod-bdb33d83fe44852cff58305212d566927462f7a16b449def61a64af5a284f7be.scope: Deactivated successfully.
Dec  1 04:49:50 np0005540825 podman[88129]: 2025-12-01 09:49:50.024215474 +0000 UTC m=+2.359795698 container died bdb33d83fe44852cff58305212d566927462f7a16b449def61a64af5a284f7be (image=quay.io/ceph/ceph:v19, name=interesting_wright, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:50 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ad31336b4eb68f43223db5b6293dda5c9fa37fcb1819ea180506f924f82eef33-merged.mount: Deactivated successfully.
Dec  1 04:49:50 np0005540825 podman[88129]: 2025-12-01 09:49:50.06295457 +0000 UTC m=+2.398534744 container remove bdb33d83fe44852cff58305212d566927462f7a16b449def61a64af5a284f7be (image=quay.io/ceph/ceph:v19, name=interesting_wright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:50 np0005540825 systemd[1]: libpod-conmon-bdb33d83fe44852cff58305212d566927462f7a16b449def61a64af5a284f7be.scope: Deactivated successfully.
Dec  1 04:49:50 np0005540825 python3[88207]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:50 np0005540825 podman[88208]: 2025-12-01 09:49:50.470225928 +0000 UTC m=+0.053055127 container create ef2f28d7d55a09fd5dff6d65594ac4c92f50ca1824dbf7f89b14c28ccfd7764c (image=quay.io/ceph/ceph:v19, name=objective_tharp, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:49:50 np0005540825 systemd[1]: Started libpod-conmon-ef2f28d7d55a09fd5dff6d65594ac4c92f50ca1824dbf7f89b14c28ccfd7764c.scope.
Dec  1 04:49:50 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919602f6a2d0c6a601d02992634086a79c5e4aab6c23b49a68298e1c46fd993c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919602f6a2d0c6a601d02992634086a79c5e4aab6c23b49a68298e1c46fd993c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919602f6a2d0c6a601d02992634086a79c5e4aab6c23b49a68298e1c46fd993c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:50 np0005540825 podman[88208]: 2025-12-01 09:49:50.446081178 +0000 UTC m=+0.028910427 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:50 np0005540825 podman[88208]: 2025-12-01 09:49:50.545491141 +0000 UTC m=+0.128320390 container init ef2f28d7d55a09fd5dff6d65594ac4c92f50ca1824dbf7f89b14c28ccfd7764c (image=quay.io/ceph/ceph:v19, name=objective_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 04:49:50 np0005540825 podman[88208]: 2025-12-01 09:49:50.552023454 +0000 UTC m=+0.134852663 container start ef2f28d7d55a09fd5dff6d65594ac4c92f50ca1824dbf7f89b14c28ccfd7764c (image=quay.io/ceph/ceph:v19, name=objective_tharp, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:50 np0005540825 podman[88208]: 2025-12-01 09:49:50.554821568 +0000 UTC m=+0.137650828 container attach ef2f28d7d55a09fd5dff6d65594ac4c92f50ca1824dbf7f89b14c28ccfd7764c (image=quay.io/ceph/ceph:v19, name=objective_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:50 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Dec  1 04:49:50 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Dec  1 04:49:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec  1 04:49:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  1 04:49:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:49:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:49:50 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Dec  1 04:49:50 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Dec  1 04:49:50 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/2873131532' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  1 04:49:50 np0005540825 ceph-mon[74416]: from='mgr.14122 192.168.122.100:0/2266810210' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  1 04:49:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec  1 04:49:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3031876280' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  1 04:49:51 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Dec  1 04:49:51 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Dec  1 04:49:51 np0005540825 ceph-mon[74416]: Deploying daemon osd.2 on compute-2
Dec  1 04:49:51 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3031876280' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  1 04:49:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3031876280' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  1: '-n'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  2: 'mgr.compute-0.fospow'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  3: '-f'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  4: '--setuser'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  5: 'ceph'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  6: '--setgroup'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  7: 'ceph'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  8: '--default-log-to-file=false'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  9: '--default-log-to-journald=true'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr respawn  exe_path /proc/self/exe
Dec  1 04:49:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.fospow(active, since 2m), standbys: compute-2.kdtkls, compute-1.ymizfm
Dec  1 04:49:52 np0005540825 systemd[1]: libpod-ef2f28d7d55a09fd5dff6d65594ac4c92f50ca1824dbf7f89b14c28ccfd7764c.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 podman[88208]: 2025-12-01 09:49:52.032025776 +0000 UTC m=+1.614855035 container died ef2f28d7d55a09fd5dff6d65594ac4c92f50ca1824dbf7f89b14c28ccfd7764c (image=quay.io/ceph/ceph:v19, name=objective_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 04:49:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay-919602f6a2d0c6a601d02992634086a79c5e4aab6c23b49a68298e1c46fd993c-merged.mount: Deactivated successfully.
Dec  1 04:49:52 np0005540825 podman[88208]: 2025-12-01 09:49:52.078130567 +0000 UTC m=+1.660959776 container remove ef2f28d7d55a09fd5dff6d65594ac4c92f50ca1824dbf7f89b14c28ccfd7764c (image=quay.io/ceph/ceph:v19, name=objective_tharp, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:52 np0005540825 systemd[1]: libpod-conmon-ef2f28d7d55a09fd5dff6d65594ac4c92f50ca1824dbf7f89b14c28ccfd7764c.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd[1]: session-33.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd[1]: session-33.scope: Consumed 23.792s CPU time.
Dec  1 04:49:52 np0005540825 systemd[1]: session-32.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd[1]: session-27.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd[1]: session-21.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd[1]: session-23.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 21 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 33 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 27 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 23 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 32 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 33.
Dec  1 04:49:52 np0005540825 systemd[1]: session-30.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 32.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 30 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd[1]: session-31.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 31 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd[1]: session-28.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 28 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd[1]: session-25.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 25 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd[1]: session-24.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd[1]: session-26.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 24 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 26 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 27.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 21.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 23.
Dec  1 04:49:52 np0005540825 systemd[1]: session-29.scope: Deactivated successfully.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 30.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Session 29 logged out. Waiting for processes to exit.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 31.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 28.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 25.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 24.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 26.
Dec  1 04:49:52 np0005540825 systemd-logind[789]: Removed session 29.
Dec  1 04:49:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ignoring --setuser ceph since I am not root
Dec  1 04:49:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ignoring --setgroup ceph since I am not root
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: pidfile_write: ignore empty --pid-file
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'alerts'
Dec  1 04:49:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:52.369+0000 7face0ad8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'balancer'
Dec  1 04:49:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:52.455+0000 7face0ad8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  1 04:49:52 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'cephadm'
Dec  1 04:49:52 np0005540825 python3[88305]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:49:52 np0005540825 podman[88306]: 2025-12-01 09:49:52.577131075 +0000 UTC m=+0.046093212 container create 99140e58bcfc0420a5044ddafefe98b441401aa46e052c25d7895435372730e1 (image=quay.io/ceph/ceph:v19, name=strange_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:52 np0005540825 systemd[1]: Started libpod-conmon-99140e58bcfc0420a5044ddafefe98b441401aa46e052c25d7895435372730e1.scope.
Dec  1 04:49:52 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:49:52 np0005540825 podman[88306]: 2025-12-01 09:49:52.555182403 +0000 UTC m=+0.024144570 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:49:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed52393c10f0905d1e830deb5ee0e96fc772e45d9e678a27ce94276ba68bfa0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed52393c10f0905d1e830deb5ee0e96fc772e45d9e678a27ce94276ba68bfa0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed52393c10f0905d1e830deb5ee0e96fc772e45d9e678a27ce94276ba68bfa0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:49:52 np0005540825 podman[88306]: 2025-12-01 09:49:52.681085718 +0000 UTC m=+0.150047885 container init 99140e58bcfc0420a5044ddafefe98b441401aa46e052c25d7895435372730e1 (image=quay.io/ceph/ceph:v19, name=strange_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:52 np0005540825 podman[88306]: 2025-12-01 09:49:52.691781812 +0000 UTC m=+0.160743949 container start 99140e58bcfc0420a5044ddafefe98b441401aa46e052c25d7895435372730e1 (image=quay.io/ceph/ceph:v19, name=strange_sammet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 04:49:52 np0005540825 podman[88306]: 2025-12-01 09:49:52.697277867 +0000 UTC m=+0.166240004 container attach 99140e58bcfc0420a5044ddafefe98b441401aa46e052c25d7895435372730e1 (image=quay.io/ceph/ceph:v19, name=strange_sammet, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 04:49:52 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Dec  1 04:49:52 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Dec  1 04:49:53 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/3031876280' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  1 04:49:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:49:53 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'crash'
Dec  1 04:49:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:53.240+0000 7face0ad8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:49:53 np0005540825 ceph-mgr[74709]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:49:53 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'dashboard'
Dec  1 04:49:53 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Dec  1 04:49:53 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Dec  1 04:49:53 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'devicehealth'
Dec  1 04:49:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:53.824+0000 7face0ad8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  1 04:49:53 np0005540825 ceph-mgr[74709]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  1 04:49:53 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'diskprediction_local'
Dec  1 04:49:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  1 04:49:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  1 04:49:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  from numpy import show_config as show_numpy_config
Dec  1 04:49:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:53.974+0000 7face0ad8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:49:53 np0005540825 ceph-mgr[74709]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:49:53 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'influx'
Dec  1 04:49:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:54.038+0000 7face0ad8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:49:54 np0005540825 ceph-mgr[74709]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:49:54 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'insights'
Dec  1 04:49:54 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'iostat'
Dec  1 04:49:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:54.167+0000 7face0ad8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:49:54 np0005540825 ceph-mgr[74709]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:49:54 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'k8sevents'
Dec  1 04:49:54 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'localpool'
Dec  1 04:49:54 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mds_autoscaler'
Dec  1 04:49:54 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec  1 04:49:54 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec  1 04:49:54 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mirroring'
Dec  1 04:49:54 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'nfs'
Dec  1 04:49:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:55.119+0000 7face0ad8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'orchestrator'
Dec  1 04:49:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:55.323+0000 7face0ad8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_perf_query'
Dec  1 04:49:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:55.394+0000 7face0ad8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_support'
Dec  1 04:49:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:55.460+0000 7face0ad8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'pg_autoscaler'
Dec  1 04:49:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:55.538+0000 7face0ad8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'progress'
Dec  1 04:49:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:55.610+0000 7face0ad8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'prometheus'
Dec  1 04:49:55 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec  1 04:49:55 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec  1 04:49:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:55.983+0000 7face0ad8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:49:55 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rbd_support'
Dec  1 04:49:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:56.080+0000 7face0ad8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:49:56 np0005540825 ceph-mgr[74709]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:49:56 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'restful'
Dec  1 04:49:56 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rgw'
Dec  1 04:49:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:56.526+0000 7face0ad8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:49:56 np0005540825 ceph-mgr[74709]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:49:56 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rook'
Dec  1 04:49:56 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec  1 04:49:56 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec  1 04:49:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:57.085+0000 7face0ad8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'selftest'
Dec  1 04:49:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:57.168+0000 7face0ad8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'snap_schedule'
Dec  1 04:49:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:57.258+0000 7face0ad8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'stats'
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'status'
Dec  1 04:49:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:57.429+0000 7face0ad8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telegraf'
Dec  1 04:49:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:57.506+0000 7face0ad8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telemetry'
Dec  1 04:49:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec  1 04:49:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  1 04:49:57 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Dec  1 04:49:57 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Dec  1 04:49:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:57.676+0000 7face0ad8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'test_orchestrator'
Dec  1 04:49:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:57.939+0000 7face0ad8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:49:57 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'volumes'
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:49:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:58.200+0000 7face0ad8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'zabbix'
Dec  1 04:49:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:49:58.270+0000 7face0ad8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fospow restarted
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fospow
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: ms_deliver_dispatch: unhandled message 0x5602ca5bd860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e38 e38: 3 total, 2 up, 3 in
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map Activating!
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map I am now activating
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 2 up, 3 in
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.fospow(active, starting, since 0.0399499s), standbys: compute-2.kdtkls, compute-1.ymizfm
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e38 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fospow", "id": "compute-0.fospow"} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fospow", "id": "compute-0.fospow"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.kdtkls", "id": "compute-2.kdtkls"} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-2.kdtkls", "id": "compute-2.kdtkls"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.ymizfm", "id": "compute-1.ymizfm"} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-1.ymizfm", "id": "compute-1.ymizfm"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e1 all = 1
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load_all_metadata Skipping incomplete metadata entry
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Manager daemon compute-0.fospow is now available
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: balancer
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [balancer INFO root] Starting
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:49:58
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: cephadm
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: crash
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: dashboard
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO sso] Loading SSO DB version=1
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: devicehealth
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Starting
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: iostat
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: nfs
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: orchestrator
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: pg_autoscaler
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: progress
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [progress INFO root] Loading...
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fac6368aee0>, <progress.module.GhostEvent object at 0x7fac5ee21190>, <progress.module.GhostEvent object at 0x7fac5ee211c0>, <progress.module.GhostEvent object at 0x7fac5ee211f0>, <progress.module.GhostEvent object at 0x7fac5ee21220>, <progress.module.GhostEvent object at 0x7fac5ee21250>, <progress.module.GhostEvent object at 0x7fac5ee21280>, <progress.module.GhostEvent object at 0x7fac5ee212b0>, <progress.module.GhostEvent object at 0x7fac5ee212e0>, <progress.module.GhostEvent object at 0x7fac5ee21310>, <progress.module.GhostEvent object at 0x7fac5ee21340>, <progress.module.GhostEvent object at 0x7fac5ee21370>, <progress.module.GhostEvent object at 0x7fac5ee213a0>] historic events
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [progress INFO root] Loaded OSDMap, ready.
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] recovery thread starting
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] starting setup
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: rbd_support
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: restful
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [restful INFO root] server_addr: :: server_port: 8003
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: status
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"} v 0)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: telemetry
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [restful WARNING root] server not running: no certificate configured
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] PerfHandler: starting
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.ymizfm restarted
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.ymizfm started
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TaskHandler: starting
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"} v 0)
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: volumes
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] setup complete
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: from='osd.2 [v2:192.168.122.102:6800/1185161015,v1:192.168.122.102:6801/1185161015]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: Active manager daemon compute-0.fospow restarted
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: Activating manager daemon compute-0.fospow
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: from='osd.2 [v2:192.168.122.102:6800/1185161015,v1:192.168.122.102:6801/1185161015]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: Manager daemon compute-0.fospow is now available
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"}]: dispatch
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  1 04:49:58 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  1 04:49:58 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  1 04:49:58 np0005540825 systemd-logind[789]: New session 34 of user ceph-admin.
Dec  1 04:49:58 np0005540825 systemd[1]: Started Session 34 of User ceph-admin.
Dec  1 04:49:58 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.module] Engine started.
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdtkls restarted
Dec  1 04:49:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdtkls started
Dec  1 04:49:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec  1 04:49:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec  1 04:49:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Dec  1 04:49:59 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec  1 04:49:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:49:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:49:59 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=11.101078987s) [] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 103.204299927s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=11.101078987s) [] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.204299927s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[6.1e( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.499829292s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 101.603179932s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[6.1e( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.499829292s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.603179932s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[4.1f( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.055423737s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 102.159027100s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[6.1c( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.504407883s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 101.608108521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[6.1c( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.504407883s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608108521s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[6.12( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.503186226s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 101.607040405s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.18( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.899868965s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 active pruub 104.003723145s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[6.12( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.503186226s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.607040405s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.18( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.899868965s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003723145s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058797836s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 102.162765503s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058797836s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162765503s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[6.17( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.503081322s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 101.607101440s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[6.17( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.503081322s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.607101440s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[7.16( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=11.104699135s) [] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 103.208869934s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[7.16( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=11.104699135s) [] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.208869934s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=11.104601860s) [] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 103.208847046s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=11.104601860s) [] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.208847046s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[4.15( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058367729s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 102.162734985s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[4.15( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058367729s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162734985s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.12( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.899291039s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 active pruub 104.003753662s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.12( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.899291039s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003753662s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058318138s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 102.162910461s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058312416s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 102.162918091s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058318138s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162910461s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058312416s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162918091s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.f( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.898815155s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 active pruub 104.003501892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[4.9( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058243752s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 102.162918091s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.f( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.898815155s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003501892s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[4.9( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058243752s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162918091s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[4.8( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058240891s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 102.163063049s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[4.8( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058240891s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163063049s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[4.1f( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.055423737s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.159027100s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.b( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.898522377s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 active pruub 104.003463745s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.b( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.898522377s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003463745s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[5.4( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.503026009s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 101.608024597s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[5.4( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.503026009s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608024597s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=11.104071617s) [] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 103.209121704s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=11.104071617s) [] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.209121704s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.5( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.898261070s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 active pruub 104.003417969s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.5( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.898261070s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003417969s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[4.1( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058191299s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 102.163536072s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[4.1( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058191299s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163536072s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[5.e( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.503227234s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 101.608665466s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[5.e( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.503227234s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608665466s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058046341s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 102.163536072s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058046341s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163536072s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058073044s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 102.163612366s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.1c( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.892655373s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 active pruub 103.998229980s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.1c( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.892655373s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.998229980s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.058073044s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163612366s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.057978630s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 102.163658142s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=10.057978630s) [] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163658142s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[5.1a( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.503061295s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 101.608818054s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.1d( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.892488480s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 active pruub 103.998245239s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[5.1a( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.503061295s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608818054s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 39 pg[2.1d( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=39 pruub=11.892488480s) [] r=-1 lpr=39 pi=[28,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.998245239s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:49:59 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec  1 04:49:59 np0005540825 podman[88613]: 2025-12-01 09:49:59.712778021 +0000 UTC m=+0.154684628 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:49:59 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14310 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:49:59 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.fospow(active, since 1.44272s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:49:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Dec  1 04:49:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v4: 193 pgs: 164 active+clean, 29 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:49:59 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1185161015; not ready for session (expect reconnect)
Dec  1 04:49:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:49:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:49:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:49:59 np0005540825 strange_sammet[88322]: Option GRAFANA_API_USERNAME updated
Dec  1 04:49:59 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  1 04:49:59 np0005540825 systemd[1]: libpod-99140e58bcfc0420a5044ddafefe98b441401aa46e052c25d7895435372730e1.scope: Deactivated successfully.
Dec  1 04:49:59 np0005540825 podman[88306]: 2025-12-01 09:49:59.788762514 +0000 UTC m=+7.257724691 container died 99140e58bcfc0420a5044ddafefe98b441401aa46e052c25d7895435372730e1 (image=quay.io/ceph/ceph:v19, name=strange_sammet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:49:59 np0005540825 podman[88613]: 2025-12-01 09:49:59.814703251 +0000 UTC m=+0.256609858 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:49:59 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7ed52393c10f0905d1e830deb5ee0e96fc772e45d9e678a27ce94276ba68bfa0-merged.mount: Deactivated successfully.
Dec  1 04:49:59 np0005540825 podman[88306]: 2025-12-01 09:49:59.867972022 +0000 UTC m=+7.336934159 container remove 99140e58bcfc0420a5044ddafefe98b441401aa46e052c25d7895435372730e1 (image=quay.io/ceph/ceph:v19, name=strange_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  1 04:49:59 np0005540825 systemd[1]: libpod-conmon-99140e58bcfc0420a5044ddafefe98b441401aa46e052c25d7895435372730e1.scope: Deactivated successfully.
Dec  1 04:49:59 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:49:59] ENGINE Bus STARTING
Dec  1 04:49:59 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:49:59] ENGINE Bus STARTING
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:50:00] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:50:00] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:50:00] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:50:00] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:50:00] ENGINE Bus STARTED
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:50:00] ENGINE Bus STARTED
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:50:00] ENGINE Client ('192.168.122.100', 46558) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:50:00] ENGINE Client ('192.168.122.100', 46558) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:50:00 np0005540825 python3[88739]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v5: 193 pgs: 164 active+clean, 29 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:50:00 np0005540825 podman[88759]: 2025-12-01 09:50:00.293033121 +0000 UTC m=+0.025533457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:00 np0005540825 podman[88759]: 2025-12-01 09:50:00.411995512 +0000 UTC m=+0.144495788 container create d55358bed567ed3d94e58c9153b79344fdc0d2896f42137d81baf068de406e2b (image=quay.io/ceph/ceph:v19, name=eager_sammet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  1 04:50:00 np0005540825 systemd[1]: Started libpod-conmon-d55358bed567ed3d94e58c9153b79344fdc0d2896f42137d81baf068de406e2b.scope.
Dec  1 04:50:00 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:00 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70ef037943a81f24fdda1d4f0f3738243783a4b71bbdf3ab947ddf14d04cd12/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:00 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70ef037943a81f24fdda1d4f0f3738243783a4b71bbdf3ab947ddf14d04cd12/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:00 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70ef037943a81f24fdda1d4f0f3738243783a4b71bbdf3ab947ddf14d04cd12/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:00 np0005540825 podman[88759]: 2025-12-01 09:50:00.503900727 +0000 UTC m=+0.236401033 container init d55358bed567ed3d94e58c9153b79344fdc0d2896f42137d81baf068de406e2b (image=quay.io/ceph/ceph:v19, name=eager_sammet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:00 np0005540825 podman[88759]: 2025-12-01 09:50:00.511235041 +0000 UTC m=+0.243735317 container start d55358bed567ed3d94e58c9153b79344fdc0d2896f42137d81baf068de406e2b (image=quay.io/ceph/ceph:v19, name=eager_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Check health
Dec  1 04:50:00 np0005540825 podman[88759]: 2025-12-01 09:50:00.530207803 +0000 UTC m=+0.262708079 container attach d55358bed567ed3d94e58c9153b79344fdc0d2896f42137d81baf068de406e2b (image=quay.io/ceph/ceph:v19, name=eager_sammet, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:00 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec  1 04:50:00 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1185161015; not ready for session (expect reconnect)
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:49:59] ENGINE Bus STARTING
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: overall HEALTH_OK
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:50:00] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:50:00] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:50:00] ENGINE Bus STARTED
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:50:00] ENGINE Client ('192.168.122.100', 46558) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Dec  1 04:50:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:00 np0005540825 eager_sammet[88825]: Option GRAFANA_API_PASSWORD updated
Dec  1 04:50:00 np0005540825 systemd[1]: libpod-d55358bed567ed3d94e58c9153b79344fdc0d2896f42137d81baf068de406e2b.scope: Deactivated successfully.
Dec  1 04:50:00 np0005540825 podman[88759]: 2025-12-01 09:50:00.949970692 +0000 UTC m=+0.682470998 container died d55358bed567ed3d94e58c9153b79344fdc0d2896f42137d81baf068de406e2b (image=quay.io/ceph/ceph:v19, name=eager_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 04:50:00 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d70ef037943a81f24fdda1d4f0f3738243783a4b71bbdf3ab947ddf14d04cd12-merged.mount: Deactivated successfully.
Dec  1 04:50:00 np0005540825 podman[88759]: 2025-12-01 09:50:00.991401379 +0000 UTC m=+0.723901655 container remove d55358bed567ed3d94e58c9153b79344fdc0d2896f42137d81baf068de406e2b (image=quay.io/ceph/ceph:v19, name=eager_sammet, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:50:01 np0005540825 systemd[1]: libpod-conmon-d55358bed567ed3d94e58c9153b79344fdc0d2896f42137d81baf068de406e2b.scope: Deactivated successfully.
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:50:01 np0005540825 python3[88979]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.fospow(active, since 3s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:50:01 np0005540825 podman[88999]: 2025-12-01 09:50:01.576933298 +0000 UTC m=+0.062980229 container create eda77fa22b37cbf62c38f05ea6e4f986340c4fc9ebcba5cac61f6b839d25545d (image=quay.io/ceph/ceph:v19, name=sharp_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:01 np0005540825 systemd[1]: Started libpod-conmon-eda77fa22b37cbf62c38f05ea6e4f986340c4fc9ebcba5cac61f6b839d25545d.scope.
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:50:01 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec  1 04:50:01 np0005540825 podman[88999]: 2025-12-01 09:50:01.548199827 +0000 UTC m=+0.034246788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:01 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:01 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec  1 04:50:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f599d6454664d908a2dc9a716430a711584a4d5921f2271990e197b2264ee3be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f599d6454664d908a2dc9a716430a711584a4d5921f2271990e197b2264ee3be/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f599d6454664d908a2dc9a716430a711584a4d5921f2271990e197b2264ee3be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:01 np0005540825 podman[88999]: 2025-12-01 09:50:01.657837881 +0000 UTC m=+0.143884852 container init eda77fa22b37cbf62c38f05ea6e4f986340c4fc9ebcba5cac61f6b839d25545d (image=quay.io/ceph/ceph:v19, name=sharp_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:01 np0005540825 podman[88999]: 2025-12-01 09:50:01.665102123 +0000 UTC m=+0.151149064 container start eda77fa22b37cbf62c38f05ea6e4f986340c4fc9ebcba5cac61f6b839d25545d (image=quay.io/ceph/ceph:v19, name=sharp_blackwell, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:50:01 np0005540825 podman[88999]: 2025-12-01 09:50:01.668949755 +0000 UTC m=+0.154996686 container attach eda77fa22b37cbf62c38f05ea6e4f986340c4fc9ebcba5cac61f6b839d25545d (image=quay.io/ceph/ceph:v19, name=sharp_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.0M
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.0M
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1185161015; not ready for session (expect reconnect)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  1 04:50:01 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:01 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14346 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:02 np0005540825 sharp_blackwell[89014]: Option ALERTMANAGER_API_HOST updated
Dec  1 04:50:02 np0005540825 systemd[1]: libpod-eda77fa22b37cbf62c38f05ea6e4f986340c4fc9ebcba5cac61f6b839d25545d.scope: Deactivated successfully.
Dec  1 04:50:02 np0005540825 podman[88999]: 2025-12-01 09:50:02.102673034 +0000 UTC m=+0.588719975 container died eda77fa22b37cbf62c38f05ea6e4f986340c4fc9ebcba5cac61f6b839d25545d (image=quay.io/ceph/ceph:v19, name=sharp_blackwell, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:50:02 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f599d6454664d908a2dc9a716430a711584a4d5921f2271990e197b2264ee3be-merged.mount: Deactivated successfully.
Dec  1 04:50:02 np0005540825 podman[88999]: 2025-12-01 09:50:02.162809257 +0000 UTC m=+0.648856178 container remove eda77fa22b37cbf62c38f05ea6e4f986340c4fc9ebcba5cac61f6b839d25545d (image=quay.io/ceph/ceph:v19, name=sharp_blackwell, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 04:50:02 np0005540825 systemd[1]: libpod-conmon-eda77fa22b37cbf62c38f05ea6e4f986340c4fc9ebcba5cac61f6b839d25545d.scope: Deactivated successfully.
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v6: 193 pgs: 164 active+clean, 29 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:02 np0005540825 python3[89261]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:02 np0005540825 podman[89322]: 2025-12-01 09:50:02.540578563 +0000 UTC m=+0.059362563 container create 8846b8de9998b87adf989c1f59f4803953ef4305dd35003801bc132144fee7e2 (image=quay.io/ceph/ceph:v19, name=silly_panini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:02 np0005540825 systemd[1]: Started libpod-conmon-8846b8de9998b87adf989c1f59f4803953ef4305dd35003801bc132144fee7e2.scope.
Dec  1 04:50:02 np0005540825 podman[89322]: 2025-12-01 09:50:02.505381711 +0000 UTC m=+0.024165741 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:02 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0a18cb43cc762890ce16602ae5a20691084ad9d5c461640f298f6c011c0708/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0a18cb43cc762890ce16602ae5a20691084ad9d5c461640f298f6c011c0708/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc0a18cb43cc762890ce16602ae5a20691084ad9d5c461640f298f6c011c0708/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:02 np0005540825 podman[89322]: 2025-12-01 09:50:02.647607258 +0000 UTC m=+0.166391288 container init 8846b8de9998b87adf989c1f59f4803953ef4305dd35003801bc132144fee7e2 (image=quay.io/ceph/ceph:v19, name=silly_panini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 04:50:02 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec  1 04:50:02 np0005540825 podman[89322]: 2025-12-01 09:50:02.653935796 +0000 UTC m=+0.172719796 container start 8846b8de9998b87adf989c1f59f4803953ef4305dd35003801bc132144fee7e2 (image=quay.io/ceph/ceph:v19, name=silly_panini, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:02 np0005540825 podman[89322]: 2025-12-01 09:50:02.657148561 +0000 UTC m=+0.175932591 container attach 8846b8de9998b87adf989c1f59f4803953ef4305dd35003801bc132144fee7e2 (image=quay.io/ceph/ceph:v19, name=silly_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 04:50:02 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1185161015; not ready for session (expect reconnect)
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: Adjusting osd_memory_target on compute-1 to 128.0M
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: Unable to set osd_memory_target on compute-1 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: Updating compute-2:/etc/ceph/ceph.conf
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14352 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:50:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Dec  1 04:50:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:03 np0005540825 silly_panini[89384]: Option PROMETHEUS_API_HOST updated
Dec  1 04:50:03 np0005540825 systemd[1]: libpod-8846b8de9998b87adf989c1f59f4803953ef4305dd35003801bc132144fee7e2.scope: Deactivated successfully.
Dec  1 04:50:03 np0005540825 podman[89322]: 2025-12-01 09:50:03.026198236 +0000 UTC m=+0.544982246 container died 8846b8de9998b87adf989c1f59f4803953ef4305dd35003801bc132144fee7e2 (image=quay.io/ceph/ceph:v19, name=silly_panini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:03 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:03 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:03 np0005540825 systemd[1]: var-lib-containers-storage-overlay-cc0a18cb43cc762890ce16602ae5a20691084ad9d5c461640f298f6c011c0708-merged.mount: Deactivated successfully.
Dec  1 04:50:03 np0005540825 podman[89322]: 2025-12-01 09:50:03.095389169 +0000 UTC m=+0.614173219 container remove 8846b8de9998b87adf989c1f59f4803953ef4305dd35003801bc132144fee7e2 (image=quay.io/ceph/ceph:v19, name=silly_panini, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 04:50:03 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.fospow(active, since 4s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:50:03 np0005540825 systemd[1]: libpod-conmon-8846b8de9998b87adf989c1f59f4803953ef4305dd35003801bc132144fee7e2.scope: Deactivated successfully.
Dec  1 04:50:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:50:03 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:03 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:03 np0005540825 python3[89699]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:03 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:03 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:03 np0005540825 podman[89733]: 2025-12-01 09:50:03.551755397 +0000 UTC m=+0.112189593 container create 7f66105718dd3442d654ff827721f6ecd710920d6b839862b49d2d26fd90487f (image=quay.io/ceph/ceph:v19, name=elated_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 04:50:03 np0005540825 podman[89733]: 2025-12-01 09:50:03.465826171 +0000 UTC m=+0.026260357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:03 np0005540825 systemd[1]: Started libpod-conmon-7f66105718dd3442d654ff827721f6ecd710920d6b839862b49d2d26fd90487f.scope.
Dec  1 04:50:03 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Dec  1 04:50:03 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f39d68a07c3433ca19021c06503fe2a43f020953a3323302e0d7bbba184e812/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f39d68a07c3433ca19021c06503fe2a43f020953a3323302e0d7bbba184e812/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f39d68a07c3433ca19021c06503fe2a43f020953a3323302e0d7bbba184e812/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:03 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Dec  1 04:50:03 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:03 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:03 np0005540825 podman[89733]: 2025-12-01 09:50:03.744594645 +0000 UTC m=+0.305028811 container init 7f66105718dd3442d654ff827721f6ecd710920d6b839862b49d2d26fd90487f (image=quay.io/ceph/ceph:v19, name=elated_fermi, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:03 np0005540825 podman[89733]: 2025-12-01 09:50:03.749651869 +0000 UTC m=+0.310086035 container start 7f66105718dd3442d654ff827721f6ecd710920d6b839862b49d2d26fd90487f (image=quay.io/ceph/ceph:v19, name=elated_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:03 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1185161015; not ready for session (expect reconnect)
Dec  1 04:50:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:50:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:50:03 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  1 04:50:03 np0005540825 podman[89733]: 2025-12-01 09:50:03.760770813 +0000 UTC m=+0.321204979 container attach 7f66105718dd3442d654ff827721f6ecd710920d6b839862b49d2d26fd90487f (image=quay.io/ceph/ceph:v19, name=elated_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:50:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14358 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:04 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:04 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:04 np0005540825 elated_fermi[89811]: Option GRAFANA_API_URL updated
Dec  1 04:50:04 np0005540825 systemd[1]: libpod-7f66105718dd3442d654ff827721f6ecd710920d6b839862b49d2d26fd90487f.scope: Deactivated successfully.
Dec  1 04:50:04 np0005540825 conmon[89811]: conmon 7f66105718dd3442d654 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f66105718dd3442d654ff827721f6ecd710920d6b839862b49d2d26fd90487f.scope/container/memory.events
Dec  1 04:50:04 np0005540825 podman[89733]: 2025-12-01 09:50:04.206586832 +0000 UTC m=+0.767021018 container died 7f66105718dd3442d654ff827721f6ecd710920d6b839862b49d2d26fd90487f (image=quay.io/ceph/ceph:v19, name=elated_fermi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:04 np0005540825 systemd[1]: var-lib-containers-storage-overlay-3f39d68a07c3433ca19021c06503fe2a43f020953a3323302e0d7bbba184e812-merged.mount: Deactivated successfully.
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:50:04 np0005540825 podman[89733]: 2025-12-01 09:50:04.29221422 +0000 UTC m=+0.852648376 container remove 7f66105718dd3442d654ff827721f6ecd710920d6b839862b49d2d26fd90487f (image=quay.io/ceph/ceph:v19, name=elated_fermi, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:04 np0005540825 systemd[1]: libpod-conmon-7f66105718dd3442d654ff827721f6ecd710920d6b839862b49d2d26fd90487f.scope: Deactivated successfully.
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:50:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v7: 193 pgs: 164 active+clean, 29 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:04 np0005540825 python3[90123]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:04 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Dec  1 04:50:04 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Dec  1 04:50:04 np0005540825 podman[90124]: 2025-12-01 09:50:04.662040606 +0000 UTC m=+0.058661835 container create 983f238a03c1b885efe88bd4abb3e6ec518c8e193c5e2f8e0250c1db9411d568 (image=quay.io/ceph/ceph:v19, name=zealous_wescoff, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 04:50:04 np0005540825 systemd[1]: Started libpod-conmon-983f238a03c1b885efe88bd4abb3e6ec518c8e193c5e2f8e0250c1db9411d568.scope.
Dec  1 04:50:04 np0005540825 podman[90124]: 2025-12-01 09:50:04.632019111 +0000 UTC m=+0.028640380 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:04 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc238f8fc07defe8e2a35dd1eff81641fe23362726d93270acfc0aff382fb9eb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc238f8fc07defe8e2a35dd1eff81641fe23362726d93270acfc0aff382fb9eb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc238f8fc07defe8e2a35dd1eff81641fe23362726d93270acfc0aff382fb9eb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
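The 0x7fffffff in these kernel messages is the largest 32-bit signed time_t: an XFS filesystem created without the bigtime feature cannot represent inode timestamps past that instant. A one-liner confirms the date, assuming UTC:

    from datetime import datetime, timezone

    # 0x7fffffff is the 32-bit signed time_t ceiling the kernel warns about.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00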
Dec  1 04:50:04 np0005540825 podman[90124]: 2025-12-01 09:50:04.742725273 +0000 UTC m=+0.139346492 container init 983f238a03c1b885efe88bd4abb3e6ec518c8e193c5e2f8e0250c1db9411d568 (image=quay.io/ceph/ceph:v19, name=zealous_wescoff, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:50:04 np0005540825 podman[90124]: 2025-12-01 09:50:04.749571925 +0000 UTC m=+0.146193124 container start 983f238a03c1b885efe88bd4abb3e6ec518c8e193c5e2f8e0250c1db9411d568 (image=quay.io/ceph/ceph:v19, name=zealous_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:50:04 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1185161015; not ready for session (expect reconnect)
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:50:04 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
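osd.2 is in the middle of (re)booting here, so the mon has no current metadata for it and answers ENOENT; the mgr simply retries until the OSD's boot completes later in the log. A hedged polling sketch of the same query, assuming the ceph CLI is on PATH (the retry helper itself is hypothetical):

    import json
    import subprocess
    import time

    # Poll `ceph osd metadata <id>` until the freshly booted OSD has
    # registered; the mon returns (2) No such file or directory until then.
    def wait_for_osd_metadata(osd_id: int, attempts: int = 10, delay: float = 2.0) -> dict:
        for _ in range(attempts):
            proc = subprocess.run(
                ["ceph", "osd", "metadata", str(osd_id), "--format", "json"],
                capture_output=True, text=True,
            )
            if proc.returncode == 0:
                return json.loads(proc.stdout)
            time.sleep(delay)
        raise TimeoutError(f"osd.{osd_id} never published metadata")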
Dec  1 04:50:04 np0005540825 podman[90124]: 2025-12-01 09:50:04.755334707 +0000 UTC m=+0.151955916 container attach 983f238a03c1b885efe88bd4abb3e6ec518c8e193c5e2f8e0250c1db9411d568 (image=quay.io/ceph/ceph:v19, name=zealous_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:50:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
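The config-key set commands above are cephadm persisting its per-host device inventory and OSD removal queue in the mon's config-key store. Any of those keys can be read back with the ceph CLI; a small sketch, using one key name taken from the audit lines:

    import subprocess

    # Read back one of the cephadm bookkeeping keys seen in the mon audit log;
    # `ceph config-key get` prints the stored blob to stdout.
    key = "mgr/cephadm/osd_remove_queue"
    out = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    print(out)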
Dec  1 04:50:04 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 15ca11fc-8854-4709-91d6-e41291b4f816 (Updating node-exporter deployment (+3 -> 3))
Dec  1 04:50:04 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Dec  1 04:50:04 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2440048888' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' 
Dec  1 04:50:05 np0005540825 systemd[1]: Reloading.
Dec  1 04:50:05 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:50:05 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
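Both generator messages fire on every daemon-reload: systemd-rc-local-generator only pulls in rc-local.service when /etc/rc.d/rc.local carries the execute bit, and systemd-sysv-generator synthesizes a unit for the legacy network init script. A quick check that mirrors the rc-local condition, assuming the standard path:

    import os

    # systemd's rc-local generator skips /etc/rc.d/rc.local unless it is
    # executable; this reproduces that check.
    path = "/etc/rc.d/rc.local"
    if os.path.exists(path) and not os.access(path, os.X_OK):
        print(f"{path} exists but is not executable; rc-local.service will be skipped")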
Dec  1 04:50:05 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.0 deep-scrub starts
Dec  1 04:50:05 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.0 deep-scrub ok
Dec  1 04:50:05 np0005540825 ceph-mgr[74709]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1185161015; not ready for session (expect reconnect)
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14304 192.168.122.100:0/3931987211' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:50:05 np0005540825 ceph-mgr[74709]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  1 04:50:05 np0005540825 systemd[1]: Reloading.
Dec  1 04:50:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec  1 04:50:05 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:50:05 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:50:06 np0005540825 systemd[1]: Starting Ceph node-exporter.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v8: 193 pgs: 164 active+clean, 29 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 27 KiB/s rd, 0 B/s wr, 10 op/s
Dec  1 04:50:06 np0005540825 bash[90377]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec  1 04:50:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2440048888' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  1: '-n'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  2: 'mgr.compute-0.fospow'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  3: '-f'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  4: '--setuser'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  5: 'ceph'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  6: '--setgroup'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  7: 'ceph'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  8: '--default-log-to-file=false'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  9: '--default-log-to-journald=true'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr respawn  exe_path /proc/self/exe
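The respawn block above shows the active mgr reacting to the dashboard module being disabled: it logs its saved argv and re-executes itself through /proc/self/exe, so the already-running binary image is reused even if the file on disk has changed. A minimal sketch of the same Linux idiom; the RESPAWNED guard is purely for this demo so the sketch only re-executes once:

    import os
    import sys

    # Re-exec the current process image via /proc/self/exe, preserving argv,
    # as ceph-mgr logs above. For CPython, /proc/self/exe is the interpreter,
    # so argv[0] is followed by the original script arguments.
    if "RESPAWNED" not in os.environ:
        os.environ["RESPAWNED"] = "1"
        os.execv("/proc/self/exe", [sys.executable, *sys.argv])
    print("respawned with argv:", sys.argv)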
Dec  1 04:50:06 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.fospow(active, since 8s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:50:06 np0005540825 systemd[1]: libpod-983f238a03c1b885efe88bd4abb3e6ec518c8e193c5e2f8e0250c1db9411d568.scope: Deactivated successfully.
Dec  1 04:50:06 np0005540825 podman[90124]: 2025-12-01 09:50:06.514208315 +0000 UTC m=+1.910829524 container died 983f238a03c1b885efe88bd4abb3e6ec518c8e193c5e2f8e0250c1db9411d568 (image=quay.io/ceph/ceph:v19, name=zealous_wescoff, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 04:50:06 np0005540825 systemd[1]: var-lib-containers-storage-overlay-cc238f8fc07defe8e2a35dd1eff81641fe23362726d93270acfc0aff382fb9eb-merged.mount: Deactivated successfully.
Dec  1 04:50:06 np0005540825 podman[90124]: 2025-12-01 09:50:06.564822626 +0000 UTC m=+1.961443815 container remove 983f238a03c1b885efe88bd4abb3e6ec518c8e193c5e2f8e0250c1db9411d568 (image=quay.io/ceph/ceph:v19, name=zealous_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:50:06 np0005540825 systemd-logind[789]: Session 34 logged out. Waiting for processes to exit.
Dec  1 04:50:06 np0005540825 systemd[1]: libpod-conmon-983f238a03c1b885efe88bd4abb3e6ec518c8e193c5e2f8e0250c1db9411d568.scope: Deactivated successfully.
Dec  1 04:50:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ignoring --setuser ceph since I am not root
Dec  1 04:50:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ignoring --setgroup ceph since I am not root
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: pidfile_write: ignore empty --pid-file
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'alerts'
Dec  1 04:50:06 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec  1 04:50:06 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec  1 04:50:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec  1 04:50:06 np0005540825 ceph-mon[74416]: OSD bench result of 4501.924530 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
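This message is the mclock scheduler's sanity check: the boot-time OSD bench is only adopted as the IOPS capacity when it lands inside the configured window, and here 4501.9 IOPS is far above the 50 to 500 range for this device class, so the stored capacity stays at 315. A sketch of that decision using the logged numbers:

    # Mirror the mon's sanity check from the log line above: a benched value
    # outside [threshold_min, threshold_max] is rejected and the stored
    # IOPS capacity is left unchanged.
    def updated_iops(benched: float, current: float,
                     threshold_min: float = 50.0, threshold_max: float = 500.0) -> float:
        if threshold_min <= benched <= threshold_max:
            return benched
        return current

    print(updated_iops(4501.924530, 315.0))  # 315.0, unchanged, as logged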
Dec  1 04:50:06 np0005540825 ceph-mon[74416]: Deploying daemon node-exporter.compute-0 on compute-0
Dec  1 04:50:06 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/2440048888' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  1 04:50:06 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1185161015,v1:192.168.122.102:6801/1185161015] boot
Dec  1 04:50:06 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec  1 04:50:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:06.727+0000 7facad871140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'balancer'
Dec  1 04:50:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:06.809+0000 7facad871140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr[py] Module balancer has missing NOTIFY_TYPES member
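The "missing NOTIFY_TYPES member" warnings are emitted at module load time: the mgr expects each Python module to declare which cluster notifications it consumes, and modules that predate the attribute trigger this non-fatal complaint. A minimal sketch of the declared attribute; the class and the notification names are illustrative assumptions, not the mgr module API in full:

    # Sketch of the attribute the loader checks for: a class-level list of
    # the notification types the module wants to receive.
    class ExampleModule:
        NOTIFY_TYPES = ["osd_map", "pg_summary"]  # assumed example values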
Dec  1 04:50:06 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'cephadm'
Dec  1 04:50:06 np0005540825 bash[90377]: Getting image source signatures
Dec  1 04:50:06 np0005540825 bash[90377]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec  1 04:50:06 np0005540825 bash[90377]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec  1 04:50:06 np0005540825 bash[90377]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec  1 04:50:06 np0005540825 python3[90447]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:07 np0005540825 podman[90453]: 2025-12-01 09:50:07.033278094 +0000 UTC m=+0.049123062 container create cb5d76f884e7f3486c423d5df912b3a0e8812244019eb8fd77a433ba5f2b56ad (image=quay.io/ceph/ceph:v19, name=modest_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:07 np0005540825 systemd[1]: Started libpod-conmon-cb5d76f884e7f3486c423d5df912b3a0e8812244019eb8fd77a433ba5f2b56ad.scope.
Dec  1 04:50:07 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:07 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472684a31330cca2b4179c2eab7019be3c1a481a9accd52db596a58f883f339b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:07 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472684a31330cca2b4179c2eab7019be3c1a481a9accd52db596a58f883f339b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:07 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472684a31330cca2b4179c2eab7019be3c1a481a9accd52db596a58f883f339b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:07 np0005540825 podman[90453]: 2025-12-01 09:50:07.013738607 +0000 UTC m=+0.029583585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:07 np0005540825 podman[90453]: 2025-12-01 09:50:07.120122155 +0000 UTC m=+0.135967133 container init cb5d76f884e7f3486c423d5df912b3a0e8812244019eb8fd77a433ba5f2b56ad (image=quay.io/ceph/ceph:v19, name=modest_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:07 np0005540825 podman[90453]: 2025-12-01 09:50:07.129593006 +0000 UTC m=+0.145437964 container start cb5d76f884e7f3486c423d5df912b3a0e8812244019eb8fd77a433ba5f2b56ad (image=quay.io/ceph/ceph:v19, name=modest_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.028895855s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.603179932s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=3.629947662s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.204299927s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.028821707s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.603179932s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=3.629913807s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.204299927s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[4.1f( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.584454775s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.159027100s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.033522844s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608108521s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.033505201s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608108521s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 podman[90453]: 2025-12-01 09:50:07.134800674 +0000 UTC m=+0.150645632 container attach cb5d76f884e7f3486c423d5df912b3a0e8812244019eb8fd77a433ba5f2b56ad (image=quay.io/ceph/ceph:v19, name=modest_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.18( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.428821087s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003723145s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[4.1f( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.584443808s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.159027100s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[6.12( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.032088280s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.607040405s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[6.12( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.032074690s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.607040405s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.18( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.428696632s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003723145s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587640524s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162765503s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=3.633678675s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.208847046s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587602377s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162765503s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=3.633665562s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.208847046s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.031831026s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.607101440s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=3.633588791s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.208869934s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=35/36 n=0 ec=32/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.031819344s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.607101440s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=3.633577824s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.208869934s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[4.15( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587339163s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162734985s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[4.15( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587324858s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162734985s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.12( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.428314686s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003753662s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.12( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.428303242s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003753662s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587343693s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162910461s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587333679s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162910461s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.f( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.427913666s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003501892s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.f( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.427898407s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003501892s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587259293s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162918091s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587243557s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162918091s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587215185s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162918091s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587205410s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.162918091s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.b( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.427509308s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003463745s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587039709s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163063049s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.b( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.427422523s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003463745s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.031960487s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608024597s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.031943083s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608024597s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.586919069s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163063049s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=3.632936716s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.209121704s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.427200317s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003417969s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.427188873s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 104.003417969s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=3.632902622s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.209121704s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[4.1( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587101936s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163536072s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[4.1( empty local-lis/les=32/33 n=0 ec=30/18 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587087631s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163536072s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587045431s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163536072s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.587031126s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163536072s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[5.e( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.032104254s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608665466s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[5.e( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.032089233s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608665466s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.586957932s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163612366s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.586945295s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163612366s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.586908102s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163658142s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.586894274s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.163658142s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.1c( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.421436787s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.998229980s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.1c( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.421422482s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.998229980s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.031965256s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608818054s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=32/19 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=2.031949997s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.608818054s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.1d( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.421265125s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.998245239s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 40 pg[2.1d( empty local-lis/les=28/30 n=0 ec=28/14 lis/c=28/28 les/c/f=30/30/0 sis=40 pruub=4.421247005s) [2] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.998245239s@ mbc={}] state<Start>: transitioning to Stray
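The burst of PeeringState messages above is osd.1 reacting to osdmap e40: for each listed PG the acting set changed from [] to [2] while osd.1 holds role -1, so every PG starts a new peering interval and transitions to Stray on this OSD. A small parser that tallies those transitions per pool from journald-style lines shaped like the ones above:

    import re
    from collections import Counter

    # Count "transitioning to Stray" events per pool from ceph-osd lines of
    # the form "... pg[<pool>.<seq>( ... transitioning to Stray".
    PG_RE = re.compile(r"pg\[(\d+)\.[0-9a-f]+\(.*transitioning to Stray")

    def stray_counts(lines):
        counts = Counter()
        for line in lines:
            m = PG_RE.search(line)
            if m:
                counts[int(m.group(1))] += 1
        return counts

    sample = "osd.1 pg_epoch: 40 pg[6.1e( empty ...)] state<Start>: transitioning to Stray"
    print(stray_counts([sample]))  # Counter({6: 1})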
Dec  1 04:50:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec  1 04:50:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/521759544' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  1 04:50:07 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'crash'
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec  1 04:50:07 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec  1 04:50:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:07.653+0000 7facad871140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:50:07 np0005540825 ceph-mgr[74709]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:50:07 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'dashboard'
Dec  1 04:50:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec  1 04:50:07 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/2440048888' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  1 04:50:07 np0005540825 ceph-mon[74416]: osd.2 [v2:192.168.122.102:6800/1185161015,v1:192.168.122.102:6801/1185161015] boot
Dec  1 04:50:07 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/521759544' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  1 04:50:07 np0005540825 bash[90377]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec  1 04:50:07 np0005540825 bash[90377]: Writing manifest to image destination
Dec  1 04:50:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec  1 04:50:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/521759544' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  1 04:50:07 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.fospow(active, since 9s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:50:07 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec  1 04:50:07 np0005540825 systemd[1]: libpod-cb5d76f884e7f3486c423d5df912b3a0e8812244019eb8fd77a433ba5f2b56ad.scope: Deactivated successfully.
Dec  1 04:50:07 np0005540825 conmon[90472]: conmon cb5d76f884e7f3486c42 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb5d76f884e7f3486c423d5df912b3a0e8812244019eb8fd77a433ba5f2b56ad.scope/container/memory.events
Dec  1 04:50:07 np0005540825 podman[90453]: 2025-12-01 09:50:07.960155696 +0000 UTC m=+0.976000684 container died cb5d76f884e7f3486c423d5df912b3a0e8812244019eb8fd77a433ba5f2b56ad (image=quay.io/ceph/ceph:v19, name=modest_almeida, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 04:50:07 np0005540825 systemd[1]: var-lib-containers-storage-overlay-472684a31330cca2b4179c2eab7019be3c1a481a9accd52db596a58f883f339b-merged.mount: Deactivated successfully.
Dec  1 04:50:07 np0005540825 podman[90377]: 2025-12-01 09:50:07.999249171 +0000 UTC m=+1.679363484 container create cd3077bd2d5a007c3a726828ac7eae9ffbb7d553deec632ef7494e1db8acac45 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:50:08 np0005540825 podman[90453]: 2025-12-01 09:50:08.01846951 +0000 UTC m=+1.034314468 container remove cb5d76f884e7f3486c423d5df912b3a0e8812244019eb8fd77a433ba5f2b56ad (image=quay.io/ceph/ceph:v19, name=modest_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 04:50:08 np0005540825 systemd[1]: libpod-conmon-cb5d76f884e7f3486c423d5df912b3a0e8812244019eb8fd77a433ba5f2b56ad.scope: Deactivated successfully.
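The conmon <nwarn> a few lines up is a teardown race: conmon tries to read the container cgroup's memory.events after exit, but systemd has already removed the scope, so the open fails harmlessly. A defensive read of the same file; the scope name here is a placeholder, not the real unit:

    from pathlib import Path

    # conmon reads memory.events from the container's cgroup v2 scope; once
    # the scope is torn down the file is gone, hence the harmless warning.
    events = Path("/sys/fs/cgroup/machine.slice/example.scope/container/memory.events")
    try:
        print(events.read_text())
    except FileNotFoundError:
        print("scope already removed; nothing to report")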
Dec  1 04:50:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cffbe57ed3d8c2bbb0a46291afd0f52d8bf60d8bc0b99b2b31c1db3ee4744b8/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:08 np0005540825 podman[90377]: 2025-12-01 09:50:07.959475857 +0000 UTC m=+1.639590270 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec  1 04:50:08 np0005540825 podman[90377]: 2025-12-01 09:50:08.054825513 +0000 UTC m=+1.734939846 container init cd3077bd2d5a007c3a726828ac7eae9ffbb7d553deec632ef7494e1db8acac45 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:50:08 np0005540825 podman[90377]: 2025-12-01 09:50:08.063188855 +0000 UTC m=+1.743303168 container start cd3077bd2d5a007c3a726828ac7eae9ffbb7d553deec632ef7494e1db8acac45 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:50:08 np0005540825 bash[90377]: cd3077bd2d5a007c3a726828ac7eae9ffbb7d553deec632ef7494e1db8acac45
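At this point cephadm has the node-exporter v1.7.0 container running as a systemd-managed podman unit, and the exporter begins serving metrics over HTTP. A sketch that probes the scrape endpoint to verify the deployment; port 9100 is node_exporter's default and an assumption here, since cephadm may bind a different port:

    import urllib.request

    # Probe the node-exporter scrape endpoint on the assumed default port.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        body = resp.read().decode()
    print(body.splitlines()[0])  # e.g. a "# HELP ..." header line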
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.068Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.068Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.069Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.069Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.069Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=arp
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=bcache
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=bonding
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=cpu
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=dmi
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=edac
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=entropy
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=filefd
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=hwmon
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=netclass
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=netdev
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=netstat
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=nfs
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=nvme
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=os
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=pressure
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=rapl
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=selinux
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=softnet
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=stat
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=textfile
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=time
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=uname
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=xfs
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.070Z caller=node_exporter.go:117 level=info collector=zfs
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.071Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[90572]: ts=2025-12-01T09:50:08.071Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
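[annotation] The exporter above finishes enumerating its collectors and reports that it is serving on port 9100 with TLS disabled, so the metrics are reachable over plain HTTP. A minimal spot-check sketch, assuming the host is queried locally and the standard /metrics path used by Prometheus exporters:

    import urllib.request

    # Scrape the node_exporter endpoint named in the "Listening on"
    # line above; TLS is disabled per the log, so plain HTTP.
    URL = "http://localhost:9100/metrics"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        body = resp.read().decode("utf-8")

    # Show only samples from the filesystem collector, whose
    # fs-types-exclude flag was parsed at startup above.
    for line in body.splitlines():
        if line.startswith("node_filesystem_"):
            print(line)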
Dec  1 04:50:08 np0005540825 systemd[1]: Started Ceph node-exporter.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:50:08 np0005540825 systemd[1]: session-34.scope: Deactivated successfully.
Dec  1 04:50:08 np0005540825 systemd[1]: session-34.scope: Consumed 5.660s CPU time.
Dec  1 04:50:08 np0005540825 systemd-logind[789]: Removed session 34.
Dec  1 04:50:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:50:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'devicehealth'
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:08.315+0000 7facad871140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  1 04:50:08 np0005540825 ceph-mgr[74709]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
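[annotation] The error-level "missing NOTIFY_TYPES member" lines recur below for nearly every module the mgr loads; they are emitted when a module class does not declare which cluster-map notifications it consumes. A hedged sketch of the attribute in question, following the upstream mgr_module API (the module itself is hypothetical and only runs inside ceph-mgr):

    # Hypothetical mgr module; import path and enum are the upstream
    # mgr_module interface, the class itself is illustrative.
    from mgr_module import MgrModule, NotifyType


    class Example(MgrModule):
        # Declaring the consumed notification types is what silences
        # the "has missing NOTIFY_TYPES member" warning in the log.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.info("got %s notification", notify_type)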
Dec  1 04:50:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'diskprediction_local'
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  from numpy import show_config as show_numpy_config
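[annotation] The three lines above are a single multi-line Python UserWarning, raised because scipy imports NumPy inside a mgr sub-interpreter. If the noise is unwanted in similar contexts, a generic (not Ceph-specific) filtering sketch:

    import warnings

    # Suppress only this UserWarning before the import that triggers
    # it; the message fragment is copied from the log above.
    warnings.filterwarnings(
        "ignore",
        message=".*NumPy was imported from a Python sub-interpreter.*",
        category=UserWarning,
    )

    import scipy  # noqa: E402  (import deliberately after the filter)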
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:08.497+0000 7facad871140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:50:08 np0005540825 ceph-mgr[74709]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:50:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'influx'
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:08.573+0000 7facad871140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:50:08 np0005540825 ceph-mgr[74709]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:50:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'insights'
Dec  1 04:50:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'iostat'
Dec  1 04:50:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:08.708+0000 7facad871140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:50:08 np0005540825 ceph-mgr[74709]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:50:08 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'k8sevents'
Dec  1 04:50:08 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec  1 04:50:08 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec  1 04:50:08 np0005540825 python3[90656]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:50:08 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/521759544' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  1 04:50:09 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'localpool'
Dec  1 04:50:09 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mds_autoscaler'
Dec  1 04:50:09 np0005540825 python3[90727]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764582608.5433364-37405-159696896765565/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
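[annotation] Ansible templates ceph_mds.yml.j2 to /tmp/ceph_mds.yml, which the podman command below bind-mounts as /home/ceph_spec.yaml. The file's contents are not in the log; a hypothetical spec of the shape such a template typically renders, with placement hosts taken from the fs volume create call further down:

    import pathlib

    # Hypothetical rendering of ceph_mds.yml.j2; the real contents are
    # not captured in the log. Hosts match the --placement argument in
    # the podman invocation below.
    SPEC = """\
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    """

    pathlib.Path("/tmp/ceph_mds.yml").write_text(SPEC)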
Dec  1 04:50:09 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mirroring'
Dec  1 04:50:09 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'nfs'
Dec  1 04:50:09 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec  1 04:50:09 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec  1 04:50:09 np0005540825 python3[90777]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:09 np0005540825 podman[90778]: 2025-12-01 09:50:09.741348325 +0000 UTC m=+0.042888817 container create be70e69fd8c2e6178c5cd297960eac0f0bd09de01dac69660cdb7114e105a666 (image=quay.io/ceph/ceph:v19, name=nice_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:09.745+0000 7facad871140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:50:09 np0005540825 ceph-mgr[74709]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:50:09 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'orchestrator'
Dec  1 04:50:09 np0005540825 systemd[1]: Started libpod-conmon-be70e69fd8c2e6178c5cd297960eac0f0bd09de01dac69660cdb7114e105a666.scope.
Dec  1 04:50:09 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:09 np0005540825 podman[90778]: 2025-12-01 09:50:09.723174363 +0000 UTC m=+0.024714885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:09 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53c30e27e459c36f3d948ae623dd176d7d296fcb76be9256e0b350c74d5352b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:09 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53c30e27e459c36f3d948ae623dd176d7d296fcb76be9256e0b350c74d5352b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:09 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53c30e27e459c36f3d948ae623dd176d7d296fcb76be9256e0b350c74d5352b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:09 np0005540825 podman[90778]: 2025-12-01 09:50:09.838130818 +0000 UTC m=+0.139671330 container init be70e69fd8c2e6178c5cd297960eac0f0bd09de01dac69660cdb7114e105a666 (image=quay.io/ceph/ceph:v19, name=nice_archimedes, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 04:50:09 np0005540825 podman[90778]: 2025-12-01 09:50:09.84422747 +0000 UTC m=+0.145767972 container start be70e69fd8c2e6178c5cd297960eac0f0bd09de01dac69660cdb7114e105a666 (image=quay.io/ceph/ceph:v19, name=nice_archimedes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  1 04:50:09 np0005540825 podman[90778]: 2025-12-01 09:50:09.847823625 +0000 UTC m=+0.149364147 container attach be70e69fd8c2e6178c5cd297960eac0f0bd09de01dac69660cdb7114e105a666 (image=quay.io/ceph/ceph:v19, name=nice_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
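[annotation] The container lifecycle above (create, init, start, attach) is a one-shot podman wrapper around the ceph CLI that creates the cephfs volume; the #012 in the logged command is rsyslog's encoding of a newline. A sketch driving the same invocation directly, with arguments copied from the logged command line:

    import subprocess

    # One-shot ceph CLI container mirroring the podman run logged
    # above; image, fsid, mounts and placement are from the log.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--volume", "/tmp/ceph_mds.yml:/home/ceph_spec.yaml:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "365f19c2-81e5-5edd-b6b4-280555214d3a",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "fs", "volume", "create", "cephfs",
        "--placement=compute-0 compute-1 compute-2",
    ]
    subprocess.run(cmd, check=True)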
Dec  1 04:50:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:09.985+0000 7facad871140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:50:09 np0005540825 ceph-mgr[74709]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:50:09 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_perf_query'
Dec  1 04:50:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:10.056+0000 7facad871140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_support'
Dec  1 04:50:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:10.120+0000 7facad871140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'pg_autoscaler'
Dec  1 04:50:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:10.196+0000 7facad871140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'progress'
Dec  1 04:50:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:10.263+0000 7facad871140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'prometheus'
Dec  1 04:50:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:10.609+0000 7facad871140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rbd_support'
Dec  1 04:50:10 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec  1 04:50:10 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec  1 04:50:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:10.701+0000 7facad871140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'restful'
Dec  1 04:50:10 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rgw'
Dec  1 04:50:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:11.107+0000 7facad871140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:50:11 np0005540825 ceph-mgr[74709]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:50:11 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rook'
Dec  1 04:50:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:11.660+0000 7facad871140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:50:11 np0005540825 ceph-mgr[74709]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:50:11 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'selftest'
Dec  1 04:50:11 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Dec  1 04:50:11 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Dec  1 04:50:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:11.733+0000 7facad871140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:50:11 np0005540825 ceph-mgr[74709]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:50:11 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'snap_schedule'
Dec  1 04:50:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:11.808+0000 7facad871140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:50:11 np0005540825 ceph-mgr[74709]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:50:11 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'stats'
Dec  1 04:50:11 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'status'
Dec  1 04:50:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:11.951+0000 7facad871140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:50:11 np0005540825 ceph-mgr[74709]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:50:11 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telegraf'
Dec  1 04:50:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:12.021+0000 7facad871140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telemetry'
Dec  1 04:50:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:12.191+0000 7facad871140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'test_orchestrator'
Dec  1 04:50:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:12.430+0000 7facad871140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'volumes'
Dec  1 04:50:12 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec  1 04:50:12 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec  1 04:50:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:12.711+0000 7facad871140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'zabbix'
Dec  1 04:50:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:12.787+0000 7facad871140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  1 04:50:12 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fospow restarted
Dec  1 04:50:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec  1 04:50:12 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fospow
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: ms_deliver_dispatch: unhandled message 0x5619f59e5860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  1: '-n'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  2: 'mgr.compute-0.fospow'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  3: '-f'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  4: '--setuser'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  5: 'ceph'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  6: '--setgroup'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  7: 'ceph'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  8: '--default-log-to-file=false'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  9: '--default-log-to-journald=true'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr respawn  exe_path /proc/self/exe
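[annotation] The block above shows the mgr respawning itself because the set of enabled modules changed (the dashboard was just enabled): it dumps the argv it will re-exec with and re-executes through /proc/self/exe rather than the on-disk path. A hedged Python sketch of that mechanism, illustrating the technique rather than ceph-mgr's actual implementation:

    import os
    import sys

    def respawn():
        # Re-exec through /proc/self/exe, as the mgr does above: the
        # symlink keeps working even if the binary on disk has been
        # replaced. For a Python process /proc/self/exe is the
        # interpreter, so the script and its args are passed back.
        os.execv("/proc/self/exe", [sys.executable] + sys.argv)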
Dec  1 04:50:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec  1 04:50:12 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec  1 04:50:12 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.fospow(active, starting, since 0.0343962s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:50:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ignoring --setuser ceph since I am not root
Dec  1 04:50:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ignoring --setgroup ceph since I am not root
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: pidfile_write: ignore empty --pid-file
Dec  1 04:50:12 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'alerts'
Dec  1 04:50:12 np0005540825 ceph-mon[74416]: Active manager daemon compute-0.fospow restarted
Dec  1 04:50:12 np0005540825 ceph-mon[74416]: Activating manager daemon compute-0.fospow
Dec  1 04:50:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:13.013+0000 7f1a713ba140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:50:13 np0005540825 ceph-mgr[74709]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:50:13 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'balancer'
Dec  1 04:50:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:13.097+0000 7f1a713ba140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  1 04:50:13 np0005540825 ceph-mgr[74709]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  1 04:50:13 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'cephadm'
Dec  1 04:50:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:50:13 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.ymizfm restarted
Dec  1 04:50:13 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.ymizfm started
Dec  1 04:50:13 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec  1 04:50:13 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec  1 04:50:13 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'crash'
Dec  1 04:50:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:13.901+0000 7f1a713ba140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:50:13 np0005540825 ceph-mgr[74709]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:50:13 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'dashboard'
Dec  1 04:50:14 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.fospow(active, starting, since 1.23559s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:50:14 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdtkls restarted
Dec  1 04:50:14 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdtkls started
Dec  1 04:50:14 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'devicehealth'
Dec  1 04:50:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:14.616+0000 7f1a713ba140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  1 04:50:14 np0005540825 ceph-mgr[74709]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  1 04:50:14 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'diskprediction_local'
Dec  1 04:50:14 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Dec  1 04:50:14 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Dec  1 04:50:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  1 04:50:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  1 04:50:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  from numpy import show_config as show_numpy_config
Dec  1 04:50:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:14.785+0000 7f1a713ba140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:50:14 np0005540825 ceph-mgr[74709]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:50:14 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'influx'
Dec  1 04:50:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:14.858+0000 7f1a713ba140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:50:14 np0005540825 ceph-mgr[74709]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:50:14 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'insights'
Dec  1 04:50:14 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'iostat'
Dec  1 04:50:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:15.006+0000 7f1a713ba140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:50:15 np0005540825 ceph-mgr[74709]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:50:15 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'k8sevents'
Dec  1 04:50:15 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.fospow(active, starting, since 2s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:50:15 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'localpool'
Dec  1 04:50:15 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mds_autoscaler'
Dec  1 04:50:15 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mirroring'
Dec  1 04:50:15 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'nfs'
Dec  1 04:50:15 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec  1 04:50:15 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec  1 04:50:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:15.982+0000 7f1a713ba140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:50:15 np0005540825 ceph-mgr[74709]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:50:15 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'orchestrator'
Dec  1 04:50:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:16.191+0000 7f1a713ba140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_perf_query'
Dec  1 04:50:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:16.271+0000 7f1a713ba140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_support'
Dec  1 04:50:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:16.342+0000 7f1a713ba140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'pg_autoscaler'
Dec  1 04:50:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:16.425+0000 7f1a713ba140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'progress'
Dec  1 04:50:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:16.509+0000 7f1a713ba140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'prometheus'
Dec  1 04:50:16 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec  1 04:50:16 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec  1 04:50:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:16.846+0000 7f1a713ba140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rbd_support'
Dec  1 04:50:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:16.947+0000 7f1a713ba140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:50:16 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'restful'
Dec  1 04:50:17 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rgw'
Dec  1 04:50:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:17.443+0000 7f1a713ba140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:50:17 np0005540825 ceph-mgr[74709]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:50:17 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rook'
Dec  1 04:50:17 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec  1 04:50:17 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec  1 04:50:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:18.056+0000 7f1a713ba140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'selftest'
Dec  1 04:50:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:18.132+0000 7f1a713ba140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'snap_schedule'
Dec  1 04:50:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:50:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:18.220+0000 7f1a713ba140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'stats'
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'status'
Dec  1 04:50:18 np0005540825 systemd[1]: Stopping User Manager for UID 42477...
Dec  1 04:50:18 np0005540825 systemd[75739]: Activating special unit Exit the Session...
Dec  1 04:50:18 np0005540825 systemd[75739]: Stopped target Main User Target.
Dec  1 04:50:18 np0005540825 systemd[75739]: Stopped target Basic System.
Dec  1 04:50:18 np0005540825 systemd[75739]: Stopped target Paths.
Dec  1 04:50:18 np0005540825 systemd[75739]: Stopped target Sockets.
Dec  1 04:50:18 np0005540825 systemd[75739]: Stopped target Timers.
Dec  1 04:50:18 np0005540825 systemd[75739]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  1 04:50:18 np0005540825 systemd[75739]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  1 04:50:18 np0005540825 systemd[75739]: Closed D-Bus User Message Bus Socket.
Dec  1 04:50:18 np0005540825 systemd[75739]: Stopped Create User's Volatile Files and Directories.
Dec  1 04:50:18 np0005540825 systemd[75739]: Removed slice User Application Slice.
Dec  1 04:50:18 np0005540825 systemd[75739]: Reached target Shutdown.
Dec  1 04:50:18 np0005540825 systemd[75739]: Finished Exit the Session.
Dec  1 04:50:18 np0005540825 systemd[75739]: Reached target Exit the Session.
Dec  1 04:50:18 np0005540825 systemd[1]: user@42477.service: Deactivated successfully.
Dec  1 04:50:18 np0005540825 systemd[1]: Stopped User Manager for UID 42477.
Dec  1 04:50:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:18.377+0000 7f1a713ba140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telegraf'
Dec  1 04:50:18 np0005540825 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  1 04:50:18 np0005540825 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  1 04:50:18 np0005540825 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  1 04:50:18 np0005540825 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  1 04:50:18 np0005540825 systemd[1]: Removed slice User Slice of UID 42477.
Dec  1 04:50:18 np0005540825 systemd[1]: user-42477.slice: Consumed 31.429s CPU time.
Dec  1 04:50:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:18.449+0000 7f1a713ba140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telemetry'
Dec  1 04:50:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:18.614+0000 7f1a713ba140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'test_orchestrator'
Dec  1 04:50:18 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec  1 04:50:18 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec  1 04:50:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:18.859+0000 7f1a713ba140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:50:18 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'volumes'
Dec  1 04:50:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:19.168+0000 7f1a713ba140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'zabbix'
Dec  1 04:50:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:50:19.251+0000 7f1a713ba140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fospow restarted
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: ms_deliver_dispatch: unhandled message 0x5632f0f85860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fospow
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.fospow(active, starting, since 0.181684s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map Activating!
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map I am now activating
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fospow", "id": "compute-0.fospow"} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fospow", "id": "compute-0.fospow"}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.ymizfm", "id": "compute-1.ymizfm"} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-1.ymizfm", "id": "compute-1.ymizfm"}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.kdtkls", "id": "compute-2.kdtkls"} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-2.kdtkls", "id": "compute-2.kdtkls"}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e1 all = 1
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata"}]: dispatch
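[annotation] On activation the new active mgr fans out metadata queries for every mon, mgr, osd and mds, each audited above. The same data is available from the CLI; a sketch fetching one of these maps as JSON (assumes admin keyring access; the command prefix is the one dispatched in the log):

    import json
    import subprocess

    # "osd metadata" is one of the prefixes dispatched by the mgr
    # above; --format json makes the output machine-readable.
    out = subprocess.run(
        ["ceph", "osd", "metadata", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for osd in json.loads(out):
        print(osd["id"], osd.get("hostname"))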
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: balancer
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Manager daemon compute-0.fospow is now available
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] Starting
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:50:19
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
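[annotation] The balancer starts in upmap mode with a 5% max-misplaced budget and backs off because, this soon after activation, 100% of PGs are still unknown to the new mgr. A sketch for checking it again later via the balancer module's standard CLI command:

    import json
    import subprocess

    def balancer_status():
        # "ceph balancer status" reports mode, plans and activity;
        # mode and budget should match the startup values logged above.
        out = subprocess.run(
            ["ceph", "balancer", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    print(balancer_status())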
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: cephadm
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: crash
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: dashboard
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: devicehealth
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Starting
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO sso] Loading SSO DB version=1
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: iostat
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: nfs
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO root] Configured CherryPy, starting engine...
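[annotation] The dashboard comes up without TLS (ssl=no) on 192.168.122.100:8443, so despite the conventional HTTPS port it must be probed over plain HTTP. A minimal reachability sketch, assuming that address stays current:

    import urllib.request

    # ssl=no per the log line above, hence http on port 8443.
    URL = "http://192.168.122.100:8443/"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(resp.status, resp.headers.get("Content-Type"))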
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: orchestrator
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: pg_autoscaler
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: progress
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [progress INFO root] Loading...
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f19f3744b80>, <progress.module.GhostEvent object at 0x7f19f3744df0>, <progress.module.GhostEvent object at 0x7f19f3744e20>, <progress.module.GhostEvent object at 0x7f19f3744e50>, <progress.module.GhostEvent object at 0x7f19f3744e80>, <progress.module.GhostEvent object at 0x7f19f3744eb0>, <progress.module.GhostEvent object at 0x7f19f3744ee0>, <progress.module.GhostEvent object at 0x7f19f3744f10>, <progress.module.GhostEvent object at 0x7f19f3744f40>, <progress.module.GhostEvent object at 0x7f19f3744f70>, <progress.module.GhostEvent object at 0x7f19f3744fa0>, <progress.module.GhostEvent object at 0x7f19f3744fd0>, <progress.module.GhostEvent object at 0x7f19f3758040>] historic events
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [progress INFO root] Loaded OSDMap, ready.
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] recovery thread starting
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] starting setup
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: rbd_support
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: restful
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: status
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: telemetry
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [restful INFO root] server_addr: :: server_port: 8003
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [restful WARNING root] server not running: no certificate configured
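[annotation] Unlike the dashboard, the restful module refuses to start without a certificate (the warning above). The module provides a helper for generating a self-signed one, per its documented interface; a sketch:

    import subprocess

    # Generate and store a self-signed certificate for the restful
    # module; once configured it binds on the port logged above (8003).
    subprocess.run(
        ["ceph", "restful", "create-self-signed-cert"],
        check=True,
    )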
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: volumes
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  1 04:50:19 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  1 04:50:19 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
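Every "Initializing controller" line above binds one dashboard controller class to a REST route under /api or /ui-api. Once the engine is serving (the "Engine started." line follows at 04:50:20), those routes can be driven directly; a sketch assuming the default dashboard port 8443 and an existing admin/secret account (neither appears in this log):

    # assumption: dashboard on 8443, credentials admin/secret; both illustrative only
    TOKEN=$(curl -ks -X POST https://192.168.122.100:8443/api/auth \
        -H 'Accept: application/vnd.ceph.api.v1.0+json' \
        -H 'Content-Type: application/json' \
        -d '{"username":"admin","password":"secret"}' | jq -r .token)
    # query the Health controller registered above (/api/health)
    curl -ks https://192.168.122.100:8443/api/health/minimal \
        -H 'Accept: application/vnd.ceph.api.v1.0+json' \
        -H "Authorization: Bearer $TOKEN"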
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: Active manager daemon compute-0.fospow restarted
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: Activating manager daemon compute-0.fospow
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] PerfHandler: starting
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.ymizfm restarted
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.ymizfm started
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TaskHandler: starting
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"} v 0)
Dec  1 04:50:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"}]: dispatch
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  1 04:50:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] setup complete
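With "setup complete", rbd_support has reloaded mirror-snapshot schedules, trash-purge schedules and background tasks for the vms, volumes, backups and images pools (all empty here: every load logs "start_after=" with no cursor). The schedules these handlers execute are the ones managed via the rbd CLI; illustrative values, since no schedules exist yet in this log:

    # assumption: example intervals only; no schedule is configured in this log
    rbd mirror snapshot schedule add --pool volumes 1h
    rbd trash purge schedule add --pool vms 1d
    # list everything the handlers will pick up on their next reload
    rbd mirror snapshot schedule ls --recursive
    rbd trash purge schedule ls --recursive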
Dec  1 04:50:20 np0005540825 systemd-logind[789]: New session 35 of user ceph-admin.
Dec  1 04:50:20 np0005540825 systemd[1]: Created slice User Slice of UID 42477.
Dec  1 04:50:20 np0005540825 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  1 04:50:20 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.module] Engine started.
Dec  1 04:50:20 np0005540825 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  1 04:50:20 np0005540825 systemd[1]: Starting User Manager for UID 42477...
Dec  1 04:50:20 np0005540825 systemd[90983]: Queued start job for default target Main User Target.
Dec  1 04:50:20 np0005540825 systemd[90983]: Created slice User Application Slice.
Dec  1 04:50:20 np0005540825 systemd[90983]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  1 04:50:20 np0005540825 systemd[90983]: Started Daily Cleanup of User's Temporary Directories.
Dec  1 04:50:20 np0005540825 systemd[90983]: Reached target Paths.
Dec  1 04:50:20 np0005540825 systemd[90983]: Reached target Timers.
Dec  1 04:50:20 np0005540825 systemd[90983]: Starting D-Bus User Message Bus Socket...
Dec  1 04:50:20 np0005540825 systemd[90983]: Starting Create User's Volatile Files and Directories...
Dec  1 04:50:20 np0005540825 systemd[90983]: Listening on D-Bus User Message Bus Socket.
Dec  1 04:50:20 np0005540825 systemd[90983]: Reached target Sockets.
Dec  1 04:50:20 np0005540825 systemd[90983]: Finished Create User's Volatile Files and Directories.
Dec  1 04:50:20 np0005540825 systemd[90983]: Reached target Basic System.
Dec  1 04:50:20 np0005540825 systemd[90983]: Reached target Main User Target.
Dec  1 04:50:20 np0005540825 systemd[90983]: Startup finished in 130ms.
Dec  1 04:50:20 np0005540825 systemd[1]: Started User Manager for UID 42477.
Dec  1 04:50:20 np0005540825 systemd[1]: Started Session 35 of User ceph-admin.
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.fospow(active, since 1.43483s), standbys: compute-1.ymizfm, compute-2.kdtkls
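mgrmap e24 shows the restarted compute-0.fospow back as the active mgr with compute-1 and compute-2 standing by. The same view is available on demand, and failover to a standby can be forced when needed; standard commands, not taken from this log:

    ceph mgr stat                   # active mgr name and map epoch
    ceph mgr fail compute-0.fospow  # disruptive: hands the active role to a standby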
Dec  1 04:50:20 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14388 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:50:20 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
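The single "fs volume create" dispatched at 04:50:20 fans out into the three mon commands above: a metadata pool create, a bulk data pool create, and "fs new" tying them together. A sketch of the client command and its rough manual equivalent (the manual steps paraphrase what the volumes module does; they were not run directly here):

    # what the client ran, per the dispatch above (trailing placement space as logged)
    ceph fs volume create cephfs --placement="compute-0 compute-1 compute-2 "
    # rough manual equivalent of the module's work
    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data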
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  1 04:50:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0[74412]: 2025-12-01T09:50:20.703+0000 7f9217d0a640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  1 04:50:20 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec  1 04:50:20 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec  1 04:50:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v3: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e2 new map
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e2 print_map
    e2
    btime 2025-12-01T09:50:20.704588+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  2
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-12-01T09:50:20.704523+0000
    modified  2025-12-01T09:50:20.704523+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in
    up  {}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    qdb_cluster  leader: 0 members:
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec  1 04:50:20 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:0
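MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX are expected at this instant: the filesystem was created moments ago and fsmap cephfs:0 shows zero MDS daemons, since the mds.cephfs service spec is only saved in the next lines. Until cephadm deploys the daemons, the state can be confirmed with:

    ceph fs status cephfs   # 0 up MDS until cephadm places them
    ceph health detail      # lists MDS_ALL_DOWN / MDS_UP_LESS_THAN_MAX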
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: Manager daemon compute-0.fospow is now available
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"}]: dispatch
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"}]: dispatch
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  1 04:50:21 np0005540825 systemd[1]: libpod-be70e69fd8c2e6178c5cd297960eac0f0bd09de01dac69660cdb7114e105a666.scope: Deactivated successfully.
Dec  1 04:50:21 np0005540825 podman[90778]: 2025-12-01 09:50:21.080060881 +0000 UTC m=+11.381601383 container died be70e69fd8c2e6178c5cd297960eac0f0bd09de01dac69660cdb7114e105a666 (image=quay.io/ceph/ceph:v19, name=nice_archimedes, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:50:21 np0005540825 podman[91120]: 2025-12-01 09:50:21.114907614 +0000 UTC m=+0.086505762 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:50:21 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e53c30e27e459c36f3d948ae623dd176d7d296fcb76be9256e0b350c74d5352b-merged.mount: Deactivated successfully.
Dec  1 04:50:21 np0005540825 podman[90778]: 2025-12-01 09:50:21.133455386 +0000 UTC m=+11.434995878 container remove be70e69fd8c2e6178c5cd297960eac0f0bd09de01dac69660cdb7114e105a666 (image=quay.io/ceph/ceph:v19, name=nice_archimedes, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  1 04:50:21 np0005540825 systemd[1]: libpod-conmon-be70e69fd8c2e6178c5cd297960eac0f0bd09de01dac69660cdb7114e105a666.scope: Deactivated successfully.
Dec  1 04:50:21 np0005540825 podman[91120]: 2025-12-01 09:50:21.220777178 +0000 UTC m=+0.192375296 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:21 np0005540825 python3[91206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
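This ansible task runs "ceph orch apply --in-file /home/ceph_spec.yaml" inside a throwaway ceph:v19 container, with /tmp/ceph_mds.yml bind-mounted as the spec. The spec's contents are not in the log; a minimal service spec consistent with the logged placement (compute-0;compute-1;compute-2) would look like:

    # assumption: reconstructed spec; the real /tmp/ceph_mds.yml is not shown in the log
    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    EOF
    ceph orch apply -i /tmp/ceph_mds.yml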
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:50:21 np0005540825 podman[91233]: 2025-12-01 09:50:21.557023135 +0000 UTC m=+0.055535372 container create 88e91f58d3df6fbf55041bc0827c8489ea641078c884cddc664a6949f8108c75 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 04:50:21 np0005540825 systemd[1]: Started libpod-conmon-88e91f58d3df6fbf55041bc0827c8489ea641078c884cddc664a6949f8108c75.scope.
Dec  1 04:50:21 np0005540825 podman[91233]: 2025-12-01 09:50:21.535464584 +0000 UTC m=+0.033976821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:50:21 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:21 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d0fb55d4bf39a5ac68fbb1835d114be9d192566b01bb7a361880b3f3a53da15/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:21 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d0fb55d4bf39a5ac68fbb1835d114be9d192566b01bb7a361880b3f3a53da15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:21 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d0fb55d4bf39a5ac68fbb1835d114be9d192566b01bb7a361880b3f3a53da15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:21 np0005540825 podman[91233]: 2025-12-01 09:50:21.664489361 +0000 UTC m=+0.163001598 container init 88e91f58d3df6fbf55041bc0827c8489ea641078c884cddc664a6949f8108c75 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  1 04:50:21 np0005540825 podman[91233]: 2025-12-01 09:50:21.671808105 +0000 UTC m=+0.170320332 container start 88e91f58d3df6fbf55041bc0827c8489ea641078c884cddc664a6949f8108c75 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:50:21 np0005540825 podman[91233]: 2025-12-01 09:50:21.675268907 +0000 UTC m=+0.173781294 container attach 88e91f58d3df6fbf55041bc0827c8489ea641078c884cddc664a6949f8108c75 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:50:21] ENGINE Bus STARTING
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:50:21] ENGINE Bus STARTING
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Check health
Dec  1 04:50:21 np0005540825 podman[91300]: 2025-12-01 09:50:21.766976283 +0000 UTC m=+0.062795351 container exec cd3077bd2d5a007c3a726828ac7eae9ffbb7d553deec632ef7494e1db8acac45 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:50:21 np0005540825 podman[91300]: 2025-12-01 09:50:21.801997737 +0000 UTC m=+0.097816815 container exec_died cd3077bd2d5a007c3a726828ac7eae9ffbb7d553deec632ef7494e1db8acac45 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:50:21 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:50:21] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:50:21] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:50:21] ENGINE Client ('192.168.122.100', 56006) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:50:21] ENGINE Client ('192.168.122.100', 56006) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:50:21 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdtkls restarted
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdtkls started
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:50:21] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:50:21] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:50:21] ENGINE Bus STARTED
Dec  1 04:50:21 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:50:21] ENGINE Bus STARTED
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:50:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:50:21] ENGINE Bus STARTING
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:50:21] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:50:21] ENGINE Client ('192.168.122.100', 56006) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:22 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14424 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 04:50:22 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:22 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.fospow(active, since 3s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:22 np0005540825 gifted_einstein[91270]: Scheduled mds.cephfs update...
Dec  1 04:50:22 np0005540825 systemd[1]: libpod-88e91f58d3df6fbf55041bc0827c8489ea641078c884cddc664a6949f8108c75.scope: Deactivated successfully.
Dec  1 04:50:22 np0005540825 podman[91233]: 2025-12-01 09:50:22.465003868 +0000 UTC m=+0.963516085 container died 88e91f58d3df6fbf55041bc0827c8489ea641078c884cddc664a6949f8108c75 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:50:22 np0005540825 systemd[1]: var-lib-containers-storage-overlay-3d0fb55d4bf39a5ac68fbb1835d114be9d192566b01bb7a361880b3f3a53da15-merged.mount: Deactivated successfully.
Dec  1 04:50:22 np0005540825 podman[91233]: 2025-12-01 09:50:22.500909176 +0000 UTC m=+0.999421403 container remove 88e91f58d3df6fbf55041bc0827c8489ea641078c884cddc664a6949f8108c75 (image=quay.io/ceph/ceph:v19, name=gifted_einstein, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 04:50:22 np0005540825 systemd[1]: libpod-conmon-88e91f58d3df6fbf55041bc0827c8489ea641078c884cddc664a6949f8108c75.scope: Deactivated successfully.
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:50:22 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec  1 04:50:22 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec  1 04:50:22 np0005540825 python3[91503]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
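This task creates a ganesha NFS cluster named cephfs behind an ingress service (haproxy plus keepalived) on virtual IP 192.168.122.2/24, using haproxy-protocol mode so client addresses survive the proxy hop. Once the mgr dispatches it (see the audit entry further below), the result can be inspected with standard commands (output not captured in this log):

    ceph nfs cluster ls
    ceph nfs cluster info cephfs   # virtual IP and backend ganesha daemons
    ceph orch ls ingress           # the haproxy/keepalived ingress service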
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:22 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.0M
Dec  1 04:50:22 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.0M
Dec  1 04:50:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  1 04:50:22 np0005540825 ceph-mgr[74709]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Dec  1 04:50:22 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
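The cephadm autotuner computed 134220595 bytes (about 128 MiB, matching the "Adjusting ... to 128.0M" line) from this small node's RAM, but osd_memory_target enforces a minimum of 939524096 bytes (896 MiB = 896 * 2^20), so the config set is rejected. On memory-constrained hosts the usual options are to disable the autotune or pin an allowed value; a sketch, assuming a 896 MiB target is tolerable here:

    # stop cephadm from autotuning osd_memory_target cluster-wide
    ceph config set osd osd_memory_target_autotune false
    # or pin the per-host target at the minimum (939524096 bytes = 896 MiB)
    ceph config set osd/host:compute-1 osd_memory_target 939524096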
Dec  1 04:50:22 np0005540825 podman[91530]: 2025-12-01 09:50:22.905356667 +0000 UTC m=+0.071310874 container create 19b014675fe6e16f40ab4c5d7ea0b7deb815b0681c1bd5175600e246fff99e19 (image=quay.io/ceph/ceph:v19, name=laughing_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:22 np0005540825 systemd[1]: Started libpod-conmon-19b014675fe6e16f40ab4c5d7ea0b7deb815b0681c1bd5175600e246fff99e19.scope.
Dec  1 04:50:22 np0005540825 podman[91530]: 2025-12-01 09:50:22.873709091 +0000 UTC m=+0.039663328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:22 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3bdf4db54e160e7a1a0d6053973e91c0f058a771010bcc4a114aa92ab5261e3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3bdf4db54e160e7a1a0d6053973e91c0f058a771010bcc4a114aa92ab5261e3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3bdf4db54e160e7a1a0d6053973e91c0f058a771010bcc4a114aa92ab5261e3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:22 np0005540825 podman[91530]: 2025-12-01 09:50:22.992765825 +0000 UTC m=+0.158720072 container init 19b014675fe6e16f40ab4c5d7ea0b7deb815b0681c1bd5175600e246fff99e19 (image=quay.io/ceph/ceph:v19, name=laughing_kare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  1 04:50:23 np0005540825 podman[91530]: 2025-12-01 09:50:23.006532568 +0000 UTC m=+0.172486795 container start 19b014675fe6e16f40ab4c5d7ea0b7deb815b0681c1bd5175600e246fff99e19 (image=quay.io/ceph/ceph:v19, name=laughing_kare, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:50:23 np0005540825 podman[91530]: 2025-12-01 09:50:23.024154864 +0000 UTC m=+0.190109121 container attach 19b014675fe6e16f40ab4c5d7ea0b7deb815b0681c1bd5175600e246fff99e19 (image=quay.io/ceph/ceph:v19, name=laughing_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:50:21] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:50:21] ENGINE Bus STARTED
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14430 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
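This audit entry is the trigger for everything that follows: the mgr `nfs` module creates the `.nfs` backing pool, saves the `nfs.cephfs` service spec, and, because ingress was requested, an `ingress.nfs.cephfs` spec for the virtual IP. Reconstructed from the dispatched JSON, the operator-side command was approximately (flag spellings per the `ceph nfs cluster create` interface; Ceph accepts dashes or underscores interchangeably):

    ceph nfs cluster create cephfs \
        "compute-0 compute-1 compute-2" \
        --ingress \
        --virtual-ip 192.168.122.2/24 \
        --ingress-mode haproxy-protocol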
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Dec  1 04:50:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:50:23 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec  1 04:50:23 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:23 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.fospow(active, since 4s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:50:24 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:24 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec  1 04:50:24 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:24 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:24 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:24 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: Adjusting osd_memory_target on compute-1 to 128.0M
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: Unable to set osd_memory_target on compute-1 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: Updating compute-2:/etc/ceph/ceph.conf
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Dec  1 04:50:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
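A pool with no application tag raises POOL_APP_NOT_ENABLED, which is exactly the health check that fires at 04:50:25 below and clears at 04:50:27 once this command finishes. The mgr tags `.nfs` automatically here; done manually it would be:

    ceph osd pool application enable .nfs nfs
    # confirm the tag afterwards:
    ceph osd pool application get .nfs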
Dec  1 04:50:24 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Dec  1 04:50:24 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Dec  1 04:50:24 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:24 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:24 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:24 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:25 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:25 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec  1 04:50:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v8: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:50:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:50:25 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:50:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:50:25 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:25 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:25 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec  1 04:50:25 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec  1 04:50:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec  1 04:50:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec  1 04:50:25 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec  1 04:50:25 np0005540825 ceph-mgr[74709]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Dec  1 04:50:25 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:25 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:50:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:50:26 np0005540825 ceph-mon[74416]: Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:26 np0005540825 ceph-mon[74416]: Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:50:26 np0005540825 ceph-mon[74416]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:26 np0005540825 ceph-mon[74416]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec  1 04:50:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec  1 04:50:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:50:26 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec  1 04:50:26 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec  1 04:50:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec  1 04:50:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v10: 194 pgs: 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 11 op/s
Dec  1 04:50:27 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  1 04:50:27 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Dec  1 04:50:27 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Dec  1 04:50:28 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec  1 04:50:28 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec  1 04:50:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v11: 194 pgs: 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 8 op/s
Dec  1 04:50:29 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec  1 04:50:29 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec  1 04:50:30 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec  1 04:50:30 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec  1 04:50:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v12: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Dec  1 04:50:31 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec  1 04:50:31 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec  1 04:50:32 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:32 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
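The `ingress.nfs.cephfs` spec tells cephadm to deploy haproxy and keepalived daemons in front of the NFS-Ganesha daemons so that 192.168.122.2 can float across the three hosts. Once the serve loop converges, the resulting services can be inspected with (sketch; standard `ceph orch` subcommands):

    ceph orch ls ingress
    ceph orch ps --service_name ingress.nfs.cephfs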
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.fospow(active, since 12s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:32 np0005540825 systemd[1]: libpod-19b014675fe6e16f40ab4c5d7ea0b7deb815b0681c1bd5175600e246fff99e19.scope: Deactivated successfully.
Dec  1 04:50:32 np0005540825 podman[91530]: 2025-12-01 09:50:32.271148616 +0000 UTC m=+9.437102843 container died 19b014675fe6e16f40ab4c5d7ea0b7deb815b0681c1bd5175600e246fff99e19 (image=quay.io/ceph/ceph:v19, name=laughing_kare, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 04:50:32 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f3bdf4db54e160e7a1a0d6053973e91c0f058a771010bcc4a114aa92ab5261e3-merged.mount: Deactivated successfully.
Dec  1 04:50:32 np0005540825 podman[91530]: 2025-12-01 09:50:32.320033267 +0000 UTC m=+9.485987494 container remove 19b014675fe6e16f40ab4c5d7ea0b7deb815b0681c1bd5175600e246fff99e19 (image=quay.io/ceph/ceph:v19, name=laughing_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  1 04:50:32 np0005540825 systemd[1]: libpod-conmon-19b014675fe6e16f40ab4c5d7ea0b7deb815b0681c1bd5175600e246fff99e19.scope: Deactivated successfully.
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:50:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:32 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 88adaad5-5c81-4215-9412-c607f656fe7e (Updating node-exporter deployment (+2 -> 3))
Dec  1 04:50:32 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Dec  1 04:50:32 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Dec  1 04:50:32 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec  1 04:50:32 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec  1 04:50:33 np0005540825 python3[92603]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: Deploying daemon node-exporter.compute-1 on compute-1
Dec  1 04:50:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:50:33 np0005540825 python3[92676]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764582632.7426002-37437-215381283304454/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=5a16a5bd4a7ebcbad903a4d80924389de6535d80 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:50:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v14: 194 pgs: 2 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Dec  1 04:50:33 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec  1 04:50:33 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec  1 04:50:33 np0005540825 python3[92726]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
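The ansible task shells out to a throwaway quay.io/ceph/ceph:v19 container because the host itself has no ceph CLI installed. Unwrapped from the single-line _raw_params, the command it runs is:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        auth import -i /etc/ceph/ceph.client.openstack.keyring

The mon's `auth import` dispatch and "finished" acknowledgements appear a few lines below.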
Dec  1 04:50:33 np0005540825 podman[92727]: 2025-12-01 09:50:33.931559087 +0000 UTC m=+0.045752570 container create 29d0e0a7a67eef7508b294fd7866d10c2bf9363b4a055650932d6985ceaff930 (image=quay.io/ceph/ceph:v19, name=lucid_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 04:50:33 np0005540825 systemd[1]: Started libpod-conmon-29d0e0a7a67eef7508b294fd7866d10c2bf9363b4a055650932d6985ceaff930.scope.
Dec  1 04:50:34 np0005540825 podman[92727]: 2025-12-01 09:50:33.910418058 +0000 UTC m=+0.024611571 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:34 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7ee3156fc5c400193d6d41715a90e2234734479daac9d48a669e57d07de7f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7ee3156fc5c400193d6d41715a90e2234734479daac9d48a669e57d07de7f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
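These kernel notices (repeated for each container mount at 04:50:35 and 04:50:36) are informational, not errors: the XFS filesystem backing /var/lib/containers was created without the bigtime feature, so its inode timestamps top out at 2038-01-19. A hedged sketch for checking, and (offline, with a sufficiently new xfsprogs) retrofitting the feature; the device path is illustrative:

    # does the filesystem under container storage have bigtime?
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'

    # upgrade an *unmounted* filesystem (needs an xfsprogs with -O support):
    xfs_admin -O bigtime=1 /dev/vda1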
Dec  1 04:50:34 np0005540825 podman[92727]: 2025-12-01 09:50:34.032970805 +0000 UTC m=+0.147164338 container init 29d0e0a7a67eef7508b294fd7866d10c2bf9363b4a055650932d6985ceaff930 (image=quay.io/ceph/ceph:v19, name=lucid_galois, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:34 np0005540825 podman[92727]: 2025-12-01 09:50:34.039514128 +0000 UTC m=+0.153707611 container start 29d0e0a7a67eef7508b294fd7866d10c2bf9363b4a055650932d6985ceaff930 (image=quay.io/ceph/ceph:v19, name=lucid_galois, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 04:50:34 np0005540825 podman[92727]: 2025-12-01 09:50:34.042910947 +0000 UTC m=+0.157104440 container attach 29d0e0a7a67eef7508b294fd7866d10c2bf9363b4a055650932d6985ceaff930 (image=quay.io/ceph/ceph:v19, name=lucid_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Dec  1 04:50:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Dec  1 04:50:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1169522764' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  1 04:50:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1169522764' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  1 04:50:34 np0005540825 systemd[1]: libpod-29d0e0a7a67eef7508b294fd7866d10c2bf9363b4a055650932d6985ceaff930.scope: Deactivated successfully.
Dec  1 04:50:34 np0005540825 podman[92727]: 2025-12-01 09:50:34.528383789 +0000 UTC m=+0.642577272 container died 29d0e0a7a67eef7508b294fd7866d10c2bf9363b4a055650932d6985ceaff930 (image=quay.io/ceph/ceph:v19, name=lucid_galois, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  1 04:50:34 np0005540825 systemd[1]: var-lib-containers-storage-overlay-6b7ee3156fc5c400193d6d41715a90e2234734479daac9d48a669e57d07de7f0-merged.mount: Deactivated successfully.
Dec  1 04:50:34 np0005540825 podman[92727]: 2025-12-01 09:50:34.564504483 +0000 UTC m=+0.678697966 container remove 29d0e0a7a67eef7508b294fd7866d10c2bf9363b4a055650932d6985ceaff930 (image=quay.io/ceph/ceph:v19, name=lucid_galois, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:50:34 np0005540825 systemd[1]: libpod-conmon-29d0e0a7a67eef7508b294fd7866d10c2bf9363b4a055650932d6985ceaff930.scope: Deactivated successfully.
Dec  1 04:50:34 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.19 deep-scrub starts
Dec  1 04:50:34 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.19 deep-scrub ok
Dec  1 04:50:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:50:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:50:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  1 04:50:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:34 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Dec  1 04:50:34 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Dec  1 04:50:35 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1169522764' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  1 04:50:35 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/1169522764' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  1 04:50:35 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:35 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:35 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:35 np0005540825 ceph-mon[74416]: Deploying daemon node-exporter.compute-2 on compute-2
Dec  1 04:50:35 np0005540825 python3[92805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
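Here the playbook counts monitors by piping containerized `ceph status --format json` through jq. Stripped of the podman wrapper, the pipeline is simply:

    ceph status --format json | jq .monmap.num_mons
    # -> 3, matching "num_mons":3 in the JSON dumped below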
Dec  1 04:50:35 np0005540825 podman[92807]: 2025-12-01 09:50:35.467503531 +0000 UTC m=+0.047871295 container create b6e1c8859722238f7d73db72cdb848159f42c39972f6b1873ce724904e42d17c (image=quay.io/ceph/ceph:v19, name=tender_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:50:35 np0005540825 systemd[1]: Started libpod-conmon-b6e1c8859722238f7d73db72cdb848159f42c39972f6b1873ce724904e42d17c.scope.
Dec  1 04:50:35 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v15: 194 pgs: 2 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec  1 04:50:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a07b49608f87c0deb2cd9e9c6ed54eb4218d7db023489c5b76c077fce64940/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a07b49608f87c0deb2cd9e9c6ed54eb4218d7db023489c5b76c077fce64940/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:35 np0005540825 podman[92807]: 2025-12-01 09:50:35.447297337 +0000 UTC m=+0.027665131 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:35 np0005540825 podman[92807]: 2025-12-01 09:50:35.550270227 +0000 UTC m=+0.130638011 container init b6e1c8859722238f7d73db72cdb848159f42c39972f6b1873ce724904e42d17c (image=quay.io/ceph/ceph:v19, name=tender_varahamihira, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:50:35 np0005540825 podman[92807]: 2025-12-01 09:50:35.555981578 +0000 UTC m=+0.136349352 container start b6e1c8859722238f7d73db72cdb848159f42c39972f6b1873ce724904e42d17c (image=quay.io/ceph/ceph:v19, name=tender_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 04:50:35 np0005540825 podman[92807]: 2025-12-01 09:50:35.560852066 +0000 UTC m=+0.141219860 container attach b6e1c8859722238f7d73db72cdb848159f42c39972f6b1873ce724904e42d17c (image=quay.io/ceph/ceph:v19, name=tender_varahamihira, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:50:35 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec  1 04:50:35 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec  1 04:50:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  1 04:50:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4074645757' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  1 04:50:35 np0005540825 tender_varahamihira[92823]: 
Dec  1 04:50:35 np0005540825 tender_varahamihira[92823]: {"fsid":"365f19c2-81e5-5edd-b6b4-280555214d3a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":66,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":47,"num_osds":3,"num_up_osds":3,"osd_up_since":1764582605,"num_in_osds":3,"osd_in_since":1764582579,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":192},{"state_name":"active+clean+scrubbing","count":2}],"num_pgs":194,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84373504,"bytes_avail":64327553024,"bytes_total":64411926528,"read_bytes_sec":22521,"write_bytes_sec":0,"read_op_per_sec":7,"write_op_per_sec":1},"fsmap":{"epoch":2,"btime":"2025-12-01T09:50:20:704588+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":3,"modified":"2025-12-01T09:49:34.718813+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"88adaad5-5c81-4215-9412-c607f656fe7e":{"message":"Updating node-exporter deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
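The status JSON reports HEALTH_ERR only because the cephfs filesystem created moments earlier has no MDS running yet (MDS_ALL_DOWN, with MDS_UP_LESS_THAN_MAX alongside): the mds.cephfs spec was saved at 04:50:23 but cephadm has not deployed the daemons, so the error is transient and clears once they start. Extracting just the health summary from output like this (jq sketch; the file name is illustrative):

    jq -r '.health.status, (.health.checks | keys[])' status.json
    # HEALTH_ERR
    # MDS_ALL_DOWN
    # MDS_UP_LESS_THAN_MAX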
Dec  1 04:50:36 np0005540825 systemd[1]: libpod-b6e1c8859722238f7d73db72cdb848159f42c39972f6b1873ce724904e42d17c.scope: Deactivated successfully.
Dec  1 04:50:36 np0005540825 podman[92807]: 2025-12-01 09:50:36.011438016 +0000 UTC m=+0.591805780 container died b6e1c8859722238f7d73db72cdb848159f42c39972f6b1873ce724904e42d17c (image=quay.io/ceph/ceph:v19, name=tender_varahamihira, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  1 04:50:36 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b4a07b49608f87c0deb2cd9e9c6ed54eb4218d7db023489c5b76c077fce64940-merged.mount: Deactivated successfully.
Dec  1 04:50:36 np0005540825 podman[92807]: 2025-12-01 09:50:36.047280993 +0000 UTC m=+0.627648767 container remove b6e1c8859722238f7d73db72cdb848159f42c39972f6b1873ce724904e42d17c (image=quay.io/ceph/ceph:v19, name=tender_varahamihira, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:36 np0005540825 systemd[1]: libpod-conmon-b6e1c8859722238f7d73db72cdb848159f42c39972f6b1873ce724904e42d17c.scope: Deactivated successfully.
Dec  1 04:50:36 np0005540825 python3[92885]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:36 np0005540825 podman[92886]: 2025-12-01 09:50:36.439737128 +0000 UTC m=+0.063743815 container create 5a02591fa0622c077d0bd2ae37452434c8448bc57b477e6f93a67dfb7fe64252 (image=quay.io/ceph/ceph:v19, name=naughty_williams, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 04:50:36 np0005540825 systemd[1]: Started libpod-conmon-5a02591fa0622c077d0bd2ae37452434c8448bc57b477e6f93a67dfb7fe64252.scope.
Dec  1 04:50:36 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fdf55590e7892b6b1e18f4963b35f208e8ca0b74e39ee0f7bcb4239422b9c74/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fdf55590e7892b6b1e18f4963b35f208e8ca0b74e39ee0f7bcb4239422b9c74/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:36 np0005540825 podman[92886]: 2025-12-01 09:50:36.414215674 +0000 UTC m=+0.038222411 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:36 np0005540825 podman[92886]: 2025-12-01 09:50:36.519169436 +0000 UTC m=+0.143176113 container init 5a02591fa0622c077d0bd2ae37452434c8448bc57b477e6f93a67dfb7fe64252 (image=quay.io/ceph/ceph:v19, name=naughty_williams, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 04:50:36 np0005540825 podman[92886]: 2025-12-01 09:50:36.524828265 +0000 UTC m=+0.148834912 container start 5a02591fa0622c077d0bd2ae37452434c8448bc57b477e6f93a67dfb7fe64252 (image=quay.io/ceph/ceph:v19, name=naughty_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Dec  1 04:50:36 np0005540825 podman[92886]: 2025-12-01 09:50:36.528238435 +0000 UTC m=+0.152249032 container attach 5a02591fa0622c077d0bd2ae37452434c8448bc57b477e6f93a67dfb7fe64252 (image=quay.io/ceph/ceph:v19, name=naughty_williams, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 04:50:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/310913945' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 04:50:36 np0005540825 naughty_williams[92902]: 
Dec  1 04:50:36 np0005540825 naughty_williams[92902]: {"epoch":3,"fsid":"365f19c2-81e5-5edd-b6b4-280555214d3a","modified":"2025-12-01T09:49:23.596118Z","created":"2025-12-01T09:46:48.019470Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Dec  1 04:50:36 np0005540825 naughty_williams[92902]: dumped monmap epoch 3
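The monmap confirms a three-mon quorum, each mon advertising a v2 (port 3300) and v1 (port 6789) address. Pulling name/address pairs out of a dump like the one above (jq sketch; file name illustrative):

    jq -r '.mons[] | "\(.name) \(.public_addrs.addrvec[0].addr)"' monmap.json
    # compute-0 192.168.122.100:3300
    # compute-2 192.168.122.102:3300
    # compute-1 192.168.122.101:3300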
Dec  1 04:50:36 np0005540825 systemd[1]: libpod-5a02591fa0622c077d0bd2ae37452434c8448bc57b477e6f93a67dfb7fe64252.scope: Deactivated successfully.
Dec  1 04:50:36 np0005540825 conmon[92902]: conmon 5a02591fa0622c077d0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a02591fa0622c077d0bd2ae37452434c8448bc57b477e6f93a67dfb7fe64252.scope/container/memory.events
Dec  1 04:50:36 np0005540825 podman[92886]: 2025-12-01 09:50:36.975781235 +0000 UTC m=+0.599787972 container died 5a02591fa0622c077d0bd2ae37452434c8448bc57b477e6f93a67dfb7fe64252 (image=quay.io/ceph/ceph:v19, name=naughty_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 04:50:36 np0005540825 systemd[1]: var-lib-containers-storage-overlay-6fdf55590e7892b6b1e18f4963b35f208e8ca0b74e39ee0f7bcb4239422b9c74-merged.mount: Deactivated successfully.
Dec  1 04:50:37 np0005540825 podman[92886]: 2025-12-01 09:50:37.015139664 +0000 UTC m=+0.639146311 container remove 5a02591fa0622c077d0bd2ae37452434c8448bc57b477e6f93a67dfb7fe64252 (image=quay.io/ceph/ceph:v19, name=naughty_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  1 04:50:37 np0005540825 systemd[1]: libpod-conmon-5a02591fa0622c077d0bd2ae37452434c8448bc57b477e6f93a67dfb7fe64252.scope: Deactivated successfully.
Dec  1 04:50:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v16: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec  1 04:50:37 np0005540825 python3[92964]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:37 np0005540825 podman[92965]: 2025-12-01 09:50:37.65833325 +0000 UTC m=+0.041778374 container create a6c9665a79558828f3c77d2400f6188ae9f9d3904e0efe89ff43ad6ba0d1ee47 (image=quay.io/ceph/ceph:v19, name=fervent_williams, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:37 np0005540825 systemd[1]: Started libpod-conmon-a6c9665a79558828f3c77d2400f6188ae9f9d3904e0efe89ff43ad6ba0d1ee47.scope.
Dec  1 04:50:37 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:37 np0005540825 podman[92965]: 2025-12-01 09:50:37.636866913 +0000 UTC m=+0.020312057 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2959ebddf64a62d36006b25b76c864a3a9313b70211c8b1e3c21144a8300e49/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2959ebddf64a62d36006b25b76c864a3a9313b70211c8b1e3c21144a8300e49/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:37 np0005540825 podman[92965]: 2025-12-01 09:50:37.745080781 +0000 UTC m=+0.128525935 container init a6c9665a79558828f3c77d2400f6188ae9f9d3904e0efe89ff43ad6ba0d1ee47 (image=quay.io/ceph/ceph:v19, name=fervent_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  1 04:50:37 np0005540825 podman[92965]: 2025-12-01 09:50:37.751681246 +0000 UTC m=+0.135126370 container start a6c9665a79558828f3c77d2400f6188ae9f9d3904e0efe89ff43ad6ba0d1ee47 (image=quay.io/ceph/ceph:v19, name=fervent_williams, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:50:37 np0005540825 podman[92965]: 2025-12-01 09:50:37.755597779 +0000 UTC m=+0.139042923 container attach a6c9665a79558828f3c77d2400f6188ae9f9d3904e0efe89ff43ad6ba0d1ee47 (image=quay.io/ceph/ceph:v19, name=fervent_williams, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/176832347' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:50:38 np0005540825 fervent_williams[92980]: [client.openstack]
Dec  1 04:50:38 np0005540825 fervent_williams[92980]: 	key = AQDkYy1pAAAAABAAkbJz0WufsOiiJsVlIdW4cg==
Dec  1 04:50:38 np0005540825 fervent_williams[92980]: 	caps mgr = "allow *"
Dec  1 04:50:38 np0005540825 fervent_williams[92980]: 	caps mon = "profile rbd"
Dec  1 04:50:38 np0005540825 fervent_williams[92980]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  1 04:50:38 np0005540825 systemd[1]: libpod-a6c9665a79558828f3c77d2400f6188ae9f9d3904e0efe89ff43ad6ba0d1ee47.scope: Deactivated successfully.
Dec  1 04:50:38 np0005540825 podman[92965]: 2025-12-01 09:50:38.181659841 +0000 UTC m=+0.565104965 container died a6c9665a79558828f3c77d2400f6188ae9f9d3904e0efe89ff43ad6ba0d1ee47 (image=quay.io/ceph/ceph:v19, name=fervent_williams, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:38 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 88adaad5-5c81-4215-9412-c607f656fe7e (Updating node-exporter deployment (+2 -> 3))
Dec  1 04:50:38 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 88adaad5-5c81-4215-9412-c607f656fe7e (Updating node-exporter deployment (+2 -> 3)) in 6 seconds
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:50:38 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e2959ebddf64a62d36006b25b76c864a3a9313b70211c8b1e3c21144a8300e49-merged.mount: Deactivated successfully.
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:50:38 np0005540825 podman[92965]: 2025-12-01 09:50:38.225688864 +0000 UTC m=+0.609133988 container remove a6c9665a79558828f3c77d2400f6188ae9f9d3904e0efe89ff43ad6ba0d1ee47 (image=quay.io/ceph/ceph:v19, name=fervent_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 04:50:38 np0005540825 systemd[1]: libpod-conmon-a6c9665a79558828f3c77d2400f6188ae9f9d3904e0efe89ff43ad6ba0d1ee47.scope: Deactivated successfully.
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/176832347' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:50:38 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:50:38 np0005540825 podman[93105]: 2025-12-01 09:50:38.737486781 +0000 UTC m=+0.053705960 container create c7dc56919e1228caa4261d81dddad169eebc3e49a58e7030f6bf5ff0df560ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_banach, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:38 np0005540825 systemd[1]: Started libpod-conmon-c7dc56919e1228caa4261d81dddad169eebc3e49a58e7030f6bf5ff0df560ab3.scope.
Dec  1 04:50:38 np0005540825 podman[93105]: 2025-12-01 09:50:38.709978294 +0000 UTC m=+0.026197563 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:50:38 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:38 np0005540825 podman[93105]: 2025-12-01 09:50:38.825142736 +0000 UTC m=+0.141361935 container init c7dc56919e1228caa4261d81dddad169eebc3e49a58e7030f6bf5ff0df560ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_banach, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Dec  1 04:50:38 np0005540825 podman[93105]: 2025-12-01 09:50:38.834703788 +0000 UTC m=+0.150922967 container start c7dc56919e1228caa4261d81dddad169eebc3e49a58e7030f6bf5ff0df560ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_banach, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 04:50:38 np0005540825 podman[93105]: 2025-12-01 09:50:38.838139139 +0000 UTC m=+0.154358358 container attach c7dc56919e1228caa4261d81dddad169eebc3e49a58e7030f6bf5ff0df560ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_banach, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  1 04:50:38 np0005540825 zen_banach[93121]: 167 167
Dec  1 04:50:38 np0005540825 systemd[1]: libpod-c7dc56919e1228caa4261d81dddad169eebc3e49a58e7030f6bf5ff0df560ab3.scope: Deactivated successfully.
Dec  1 04:50:38 np0005540825 podman[93105]: 2025-12-01 09:50:38.841695763 +0000 UTC m=+0.157914972 container died c7dc56919e1228caa4261d81dddad169eebc3e49a58e7030f6bf5ff0df560ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:50:38 np0005540825 systemd[1]: var-lib-containers-storage-overlay-76f993815183b3960410fad80573b5697b0047bb9a70e14338b68274085ef6e0-merged.mount: Deactivated successfully.
Dec  1 04:50:38 np0005540825 podman[93105]: 2025-12-01 09:50:38.886667181 +0000 UTC m=+0.202886360 container remove c7dc56919e1228caa4261d81dddad169eebc3e49a58e7030f6bf5ff0df560ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 04:50:38 np0005540825 systemd[1]: libpod-conmon-c7dc56919e1228caa4261d81dddad169eebc3e49a58e7030f6bf5ff0df560ab3.scope: Deactivated successfully.
Dec  1 04:50:39 np0005540825 podman[93146]: 2025-12-01 09:50:39.067084175 +0000 UTC m=+0.058993259 container create 534bfbdc36041598ca150d49a900d960bc302f8da89eb44cc7c14a3340bbf5ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:39 np0005540825 systemd[1]: Started libpod-conmon-534bfbdc36041598ca150d49a900d960bc302f8da89eb44cc7c14a3340bbf5ce.scope.
Dec  1 04:50:39 np0005540825 podman[93146]: 2025-12-01 09:50:39.039451686 +0000 UTC m=+0.031360830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:50:39 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e706c2fb15d54dcee217a587d5b4ac8573a3e0a57fe923eb8692d0b0487fa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e706c2fb15d54dcee217a587d5b4ac8573a3e0a57fe923eb8692d0b0487fa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e706c2fb15d54dcee217a587d5b4ac8573a3e0a57fe923eb8692d0b0487fa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e706c2fb15d54dcee217a587d5b4ac8573a3e0a57fe923eb8692d0b0487fa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e706c2fb15d54dcee217a587d5b4ac8573a3e0a57fe923eb8692d0b0487fa1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:39 np0005540825 podman[93146]: 2025-12-01 09:50:39.159900807 +0000 UTC m=+0.151809891 container init 534bfbdc36041598ca150d49a900d960bc302f8da89eb44cc7c14a3340bbf5ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:39 np0005540825 podman[93146]: 2025-12-01 09:50:39.177089361 +0000 UTC m=+0.168998435 container start 534bfbdc36041598ca150d49a900d960bc302f8da89eb44cc7c14a3340bbf5ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 04:50:39 np0005540825 podman[93146]: 2025-12-01 09:50:39.181727713 +0000 UTC m=+0.173636787 container attach 534bfbdc36041598ca150d49a900d960bc302f8da89eb44cc7c14a3340bbf5ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kalam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  1 04:50:39 np0005540825 ceph-mon[74416]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec  1 04:50:39 np0005540825 elastic_kalam[93162]: --> passed data devices: 0 physical, 1 LVM
Dec  1 04:50:39 np0005540825 elastic_kalam[93162]: --> All data devices are unavailable
Dec  1 04:50:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v17: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec  1 04:50:39 np0005540825 systemd[1]: libpod-534bfbdc36041598ca150d49a900d960bc302f8da89eb44cc7c14a3340bbf5ce.scope: Deactivated successfully.
Dec  1 04:50:39 np0005540825 podman[93146]: 2025-12-01 09:50:39.537776516 +0000 UTC m=+0.529685600 container died 534bfbdc36041598ca150d49a900d960bc302f8da89eb44cc7c14a3340bbf5ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  1 04:50:39 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b9e706c2fb15d54dcee217a587d5b4ac8573a3e0a57fe923eb8692d0b0487fa1-merged.mount: Deactivated successfully.
Dec  1 04:50:39 np0005540825 podman[93146]: 2025-12-01 09:50:39.590426277 +0000 UTC m=+0.582335361 container remove 534bfbdc36041598ca150d49a900d960bc302f8da89eb44cc7c14a3340bbf5ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  1 04:50:39 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 14 completed events
Dec  1 04:50:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:50:39 np0005540825 systemd[1]: libpod-conmon-534bfbdc36041598ca150d49a900d960bc302f8da89eb44cc7c14a3340bbf5ce.scope: Deactivated successfully.
Dec  1 04:50:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:39 np0005540825 ansible-async_wrapper.py[93337]: Invoked with j516251280274 30 /home/zuul/.ansible/tmp/ansible-tmp-1764582639.2523723-37509-204443744213009/AnsiballZ_command.py _
Dec  1 04:50:39 np0005540825 ansible-async_wrapper.py[93390]: Starting module and watcher
Dec  1 04:50:39 np0005540825 ansible-async_wrapper.py[93390]: Start watching 93391 (30)
Dec  1 04:50:39 np0005540825 ansible-async_wrapper.py[93391]: Start module (93391)
Dec  1 04:50:39 np0005540825 ansible-async_wrapper.py[93337]: Return async_wrapper task started.
Dec  1 04:50:39 np0005540825 python3[93392]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:40 np0005540825 podman[93393]: 2025-12-01 09:50:39.977426828 +0000 UTC m=+0.038860498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:40 np0005540825 podman[93393]: 2025-12-01 09:50:40.432217499 +0000 UTC m=+0.493651079 container create bc25a16330bc84a1793652ba6e87a13fd85ed3bacf78ebc27ffdc655540a7cc2 (image=quay.io/ceph/ceph:v19, name=peaceful_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 04:50:40 np0005540825 systemd[1]: Started libpod-conmon-bc25a16330bc84a1793652ba6e87a13fd85ed3bacf78ebc27ffdc655540a7cc2.scope.
Dec  1 04:50:40 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d36ba405ca08ed9ab9d7e35ee7c78d51895d02fdb5c975899720311d7406676/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d36ba405ca08ed9ab9d7e35ee7c78d51895d02fdb5c975899720311d7406676/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:40 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:40 np0005540825 podman[93393]: 2025-12-01 09:50:40.84122583 +0000 UTC m=+0.902659420 container init bc25a16330bc84a1793652ba6e87a13fd85ed3bacf78ebc27ffdc655540a7cc2 (image=quay.io/ceph/ceph:v19, name=peaceful_khayyam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  1 04:50:40 np0005540825 podman[93393]: 2025-12-01 09:50:40.855450945 +0000 UTC m=+0.916884515 container start bc25a16330bc84a1793652ba6e87a13fd85ed3bacf78ebc27ffdc655540a7cc2 (image=quay.io/ceph/ceph:v19, name=peaceful_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec  1 04:50:41 np0005540825 python3[93493]: ansible-ansible.legacy.async_status Invoked with jid=j516251280274.93337 mode=status _async_dir=/root/.ansible_async
Dec  1 04:50:41 np0005540825 podman[93393]: 2025-12-01 09:50:41.184755662 +0000 UTC m=+1.246189242 container attach bc25a16330bc84a1793652ba6e87a13fd85ed3bacf78ebc27ffdc655540a7cc2 (image=quay.io/ceph/ceph:v19, name=peaceful_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  1 04:50:41 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  1 04:50:41 np0005540825 peaceful_khayyam[93431]: 
Dec  1 04:50:41 np0005540825 peaceful_khayyam[93431]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  1 04:50:41 np0005540825 systemd[1]: libpod-bc25a16330bc84a1793652ba6e87a13fd85ed3bacf78ebc27ffdc655540a7cc2.scope: Deactivated successfully.
Dec  1 04:50:41 np0005540825 conmon[93431]: conmon bc25a16330bc84a17936 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc25a16330bc84a1793652ba6e87a13fd85ed3bacf78ebc27ffdc655540a7cc2.scope/container/memory.events
Dec  1 04:50:41 np0005540825 podman[93393]: 2025-12-01 09:50:41.279790042 +0000 UTC m=+1.341223622 container died bc25a16330bc84a1793652ba6e87a13fd85ed3bacf78ebc27ffdc655540a7cc2 (image=quay.io/ceph/ceph:v19, name=peaceful_khayyam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:50:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0d36ba405ca08ed9ab9d7e35ee7c78d51895d02fdb5c975899720311d7406676-merged.mount: Deactivated successfully.
Dec  1 04:50:41 np0005540825 podman[93393]: 2025-12-01 09:50:41.339049357 +0000 UTC m=+1.400482927 container remove bc25a16330bc84a1793652ba6e87a13fd85ed3bacf78ebc27ffdc655540a7cc2 (image=quay.io/ceph/ceph:v19, name=peaceful_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 04:50:41 np0005540825 systemd[1]: libpod-conmon-bc25a16330bc84a1793652ba6e87a13fd85ed3bacf78ebc27ffdc655540a7cc2.scope: Deactivated successfully.
Dec  1 04:50:41 np0005540825 ansible-async_wrapper.py[93391]: Module complete (93391)
Dec  1 04:50:41 np0005540825 podman[93529]: 2025-12-01 09:50:41.395727534 +0000 UTC m=+0.022105415 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:50:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v18: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:50:41 np0005540825 podman[93529]: 2025-12-01 09:50:41.537997521 +0000 UTC m=+0.164375392 container create 3ea1be701da175695b81248120b448d647fd676c4f91e5dd3c35745132a54d7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:50:41 np0005540825 systemd[1]: Started libpod-conmon-3ea1be701da175695b81248120b448d647fd676c4f91e5dd3c35745132a54d7b.scope.
Dec  1 04:50:41 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:41 np0005540825 podman[93529]: 2025-12-01 09:50:41.625602695 +0000 UTC m=+0.251980576 container init 3ea1be701da175695b81248120b448d647fd676c4f91e5dd3c35745132a54d7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:50:41 np0005540825 podman[93529]: 2025-12-01 09:50:41.63337558 +0000 UTC m=+0.259753451 container start 3ea1be701da175695b81248120b448d647fd676c4f91e5dd3c35745132a54d7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:50:41 np0005540825 podman[93529]: 2025-12-01 09:50:41.637954901 +0000 UTC m=+0.264332812 container attach 3ea1be701da175695b81248120b448d647fd676c4f91e5dd3c35745132a54d7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:50:41 np0005540825 naughty_volhard[93545]: 167 167
Dec  1 04:50:41 np0005540825 systemd[1]: libpod-3ea1be701da175695b81248120b448d647fd676c4f91e5dd3c35745132a54d7b.scope: Deactivated successfully.
Dec  1 04:50:41 np0005540825 podman[93529]: 2025-12-01 09:50:41.63980268 +0000 UTC m=+0.266180551 container died 3ea1be701da175695b81248120b448d647fd676c4f91e5dd3c35745132a54d7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 04:50:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d475eb4e3be7216b4f24778417dcbeb365a31b9d4dc729782d140b4a3cc4c7ad-merged.mount: Deactivated successfully.
Dec  1 04:50:41 np0005540825 podman[93529]: 2025-12-01 09:50:41.688582888 +0000 UTC m=+0.314960759 container remove 3ea1be701da175695b81248120b448d647fd676c4f91e5dd3c35745132a54d7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec  1 04:50:41 np0005540825 systemd[1]: libpod-conmon-3ea1be701da175695b81248120b448d647fd676c4f91e5dd3c35745132a54d7b.scope: Deactivated successfully.
Dec  1 04:50:41 np0005540825 podman[93569]: 2025-12-01 09:50:41.891687512 +0000 UTC m=+0.056561454 container create 90d31b657bbe0fca92c182902fc15eb2cc09eaeef61be072caf46b797832c536 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  1 04:50:41 np0005540825 systemd[1]: Started libpod-conmon-90d31b657bbe0fca92c182902fc15eb2cc09eaeef61be072caf46b797832c536.scope.
Dec  1 04:50:41 np0005540825 podman[93569]: 2025-12-01 09:50:41.862155032 +0000 UTC m=+0.027028984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:50:41 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cf7ee345ccb9b0180a47036e7961f0da7c92f4754ee9f227f3edbd290f0666/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cf7ee345ccb9b0180a47036e7961f0da7c92f4754ee9f227f3edbd290f0666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cf7ee345ccb9b0180a47036e7961f0da7c92f4754ee9f227f3edbd290f0666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cf7ee345ccb9b0180a47036e7961f0da7c92f4754ee9f227f3edbd290f0666/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:42 np0005540825 podman[93569]: 2025-12-01 09:50:42.019331133 +0000 UTC m=+0.184205075 container init 90d31b657bbe0fca92c182902fc15eb2cc09eaeef61be072caf46b797832c536 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:50:42 np0005540825 podman[93569]: 2025-12-01 09:50:42.029950844 +0000 UTC m=+0.194824766 container start 90d31b657bbe0fca92c182902fc15eb2cc09eaeef61be072caf46b797832c536 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Dec  1 04:50:42 np0005540825 podman[93569]: 2025-12-01 09:50:42.06992542 +0000 UTC m=+0.234799372 container attach 90d31b657bbe0fca92c182902fc15eb2cc09eaeef61be072caf46b797832c536 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]: {
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:    "1": [
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:        {
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            "devices": [
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "/dev/loop3"
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            ],
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            "lv_name": "ceph_lv0",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            "lv_size": "21470642176",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            "name": "ceph_lv0",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            "tags": {
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.cephx_lockbox_secret": "",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.cluster_name": "ceph",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.crush_device_class": "",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.encrypted": "0",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.osd_id": "1",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.type": "block",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.vdo": "0",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:                "ceph.with_tpm": "0"
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            },
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            "type": "block",
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:            "vg_name": "ceph_vg0"
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:        }
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]:    ]
Dec  1 04:50:42 np0005540825 adoring_davinci[93585]: }
Dec  1 04:50:42 np0005540825 systemd[1]: libpod-90d31b657bbe0fca92c182902fc15eb2cc09eaeef61be072caf46b797832c536.scope: Deactivated successfully.
Dec  1 04:50:42 np0005540825 podman[93569]: 2025-12-01 09:50:42.361209712 +0000 UTC m=+0.526083654 container died 90d31b657bbe0fca92c182902fc15eb2cc09eaeef61be072caf46b797832c536 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 04:50:42 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c5cf7ee345ccb9b0180a47036e7961f0da7c92f4754ee9f227f3edbd290f0666-merged.mount: Deactivated successfully.
Dec  1 04:50:42 np0005540825 podman[93569]: 2025-12-01 09:50:42.418118255 +0000 UTC m=+0.582992177 container remove 90d31b657bbe0fca92c182902fc15eb2cc09eaeef61be072caf46b797832c536 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:42 np0005540825 python3[93640]: ansible-ansible.legacy.async_status Invoked with jid=j516251280274.93337 mode=status _async_dir=/root/.ansible_async
Dec  1 04:50:42 np0005540825 systemd[1]: libpod-conmon-90d31b657bbe0fca92c182902fc15eb2cc09eaeef61be072caf46b797832c536.scope: Deactivated successfully.
Dec  1 04:50:42 np0005540825 python3[93750]: ansible-ansible.legacy.async_status Invoked with jid=j516251280274.93337 mode=cleanup _async_dir=/root/.ansible_async
Dec  1 04:50:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:50:42 np0005540825 podman[93792]: 2025-12-01 09:50:42.965196904 +0000 UTC m=+0.042876244 container create 64e61712d1bd4165b8d3a663f1fa8e177bb80dbf850841252e8c1036ca5a2e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Dec  1 04:50:42 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:50:42 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:43 np0005540825 systemd[1]: Started libpod-conmon-64e61712d1bd4165b8d3a663f1fa8e177bb80dbf850841252e8c1036ca5a2e5e.scope.
Dec  1 04:50:43 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:43 np0005540825 podman[93792]: 2025-12-01 09:50:42.947461805 +0000 UTC m=+0.025141165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:50:43 np0005540825 podman[93792]: 2025-12-01 09:50:43.055525799 +0000 UTC m=+0.133205239 container init 64e61712d1bd4165b8d3a663f1fa8e177bb80dbf850841252e8c1036ca5a2e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dhawan, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:43 np0005540825 podman[93792]: 2025-12-01 09:50:43.064194418 +0000 UTC m=+0.141873758 container start 64e61712d1bd4165b8d3a663f1fa8e177bb80dbf850841252e8c1036ca5a2e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  1 04:50:43 np0005540825 podman[93792]: 2025-12-01 09:50:43.067907336 +0000 UTC m=+0.145586676 container attach 64e61712d1bd4165b8d3a663f1fa8e177bb80dbf850841252e8c1036ca5a2e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:43 np0005540825 epic_dhawan[93808]: 167 167
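The "167 167" printed by the short-lived epic_dhawan container above is consistent with cephadm probing the image for the ceph user's uid and gid (167:167 in the upstream ceph containers). A hypothetical reconstruction of that probe is sketched below; the actual argv is not recorded in this log, so both the stat call and its placement are assumptions:

    # hypothetical reconstruction of cephadm's uid/gid probe, not taken from this log
    podman run --rm --entrypoint stat \
      quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
      -c '%u %g' /var/lib/ceph    # would print "167 167"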
Dec  1 04:50:43 np0005540825 systemd[1]: libpod-64e61712d1bd4165b8d3a663f1fa8e177bb80dbf850841252e8c1036ca5a2e5e.scope: Deactivated successfully.
Dec  1 04:50:43 np0005540825 podman[93792]: 2025-12-01 09:50:43.070457894 +0000 UTC m=+0.148137234 container died 64e61712d1bd4165b8d3a663f1fa8e177bb80dbf850841252e8c1036ca5a2e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  1 04:50:43 np0005540825 systemd[1]: var-lib-containers-storage-overlay-abe3838c75bbf02dc45dbebe81b7ac80a67b94a845bc8678281643980c74d268-merged.mount: Deactivated successfully.
Dec  1 04:50:43 np0005540825 podman[93792]: 2025-12-01 09:50:43.108840707 +0000 UTC m=+0.186520057 container remove 64e61712d1bd4165b8d3a663f1fa8e177bb80dbf850841252e8c1036ca5a2e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_dhawan, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:50:43 np0005540825 systemd[1]: libpod-conmon-64e61712d1bd4165b8d3a663f1fa8e177bb80dbf850841252e8c1036ca5a2e5e.scope: Deactivated successfully.
Dec  1 04:50:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:50:43 np0005540825 podman[93857]: 2025-12-01 09:50:43.269671155 +0000 UTC m=+0.051005628 container create 07e25b4628598c9922c56f63d9ac5e30f084bfdeb3faf598086fbf2d7c93f054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_benz, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  1 04:50:43 np0005540825 systemd[1]: Started libpod-conmon-07e25b4628598c9922c56f63d9ac5e30f084bfdeb3faf598086fbf2d7c93f054.scope.
Dec  1 04:50:43 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f7b718d54c35a9f97dd3306819e4bbb32c934b8b6be2df36ad632be64b50a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f7b718d54c35a9f97dd3306819e4bbb32c934b8b6be2df36ad632be64b50a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f7b718d54c35a9f97dd3306819e4bbb32c934b8b6be2df36ad632be64b50a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f7b718d54c35a9f97dd3306819e4bbb32c934b8b6be2df36ad632be64b50a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:43 np0005540825 podman[93857]: 2025-12-01 09:50:43.24599684 +0000 UTC m=+0.027331333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:50:43 np0005540825 podman[93857]: 2025-12-01 09:50:43.354993918 +0000 UTC m=+0.136328411 container init 07e25b4628598c9922c56f63d9ac5e30f084bfdeb3faf598086fbf2d7c93f054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  1 04:50:43 np0005540825 podman[93857]: 2025-12-01 09:50:43.366080291 +0000 UTC m=+0.147414794 container start 07e25b4628598c9922c56f63d9ac5e30f084bfdeb3faf598086fbf2d7c93f054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_benz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:43 np0005540825 python3[93856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:43 np0005540825 podman[93857]: 2025-12-01 09:50:43.371864134 +0000 UTC m=+0.153198627 container attach 07e25b4628598c9922c56f63d9ac5e30f084bfdeb3faf598086fbf2d7c93f054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_benz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  1 04:50:43 np0005540825 podman[93878]: 2025-12-01 09:50:43.431680704 +0000 UTC m=+0.046548471 container create 9a9981e0d407eb186cad21b6e9c56d08133fe7b1003509624061fcb2c08c2f80 (image=quay.io/ceph/ceph:v19, name=charming_mendel, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:43 np0005540825 systemd[1]: Started libpod-conmon-9a9981e0d407eb186cad21b6e9c56d08133fe7b1003509624061fcb2c08c2f80.scope.
Dec  1 04:50:43 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ac53ab9221b4989ab17a17d96dda08aed9b9fbf5b76e41ed812d7854447d04/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ac53ab9221b4989ab17a17d96dda08aed9b9fbf5b76e41ed812d7854447d04/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:43 np0005540825 podman[93878]: 2025-12-01 09:50:43.506030317 +0000 UTC m=+0.120898094 container init 9a9981e0d407eb186cad21b6e9c56d08133fe7b1003509624061fcb2c08c2f80 (image=quay.io/ceph/ceph:v19, name=charming_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:50:43 np0005540825 podman[93878]: 2025-12-01 09:50:43.410649358 +0000 UTC m=+0.025517135 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:43 np0005540825 podman[93878]: 2025-12-01 09:50:43.513527525 +0000 UTC m=+0.128395282 container start 9a9981e0d407eb186cad21b6e9c56d08133fe7b1003509624061fcb2c08c2f80 (image=quay.io/ceph/ceph:v19, name=charming_mendel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 04:50:43 np0005540825 podman[93878]: 2025-12-01 09:50:43.517267884 +0000 UTC m=+0.132135661 container attach 9a9981e0d407eb186cad21b6e9c56d08133fe7b1003509624061fcb2c08c2f80 (image=quay.io/ceph/ceph:v19, name=charming_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:50:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v19: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:50:43 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14472 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  1 04:50:43 np0005540825 charming_mendel[93893]: 
Dec  1 04:50:43 np0005540825 charming_mendel[93893]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  1 04:50:43 np0005540825 systemd[1]: libpod-9a9981e0d407eb186cad21b6e9c56d08133fe7b1003509624061fcb2c08c2f80.scope: Deactivated successfully.
Dec  1 04:50:43 np0005540825 podman[93878]: 2025-12-01 09:50:43.91316234 +0000 UTC m=+0.528030107 container died 9a9981e0d407eb186cad21b6e9c56d08133fe7b1003509624061fcb2c08c2f80 (image=quay.io/ceph/ceph:v19, name=charming_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  1 04:50:44 np0005540825 lvm[93999]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:50:44 np0005540825 lvm[93999]: VG ceph_vg0 finished
Dec  1 04:50:44 np0005540825 goofy_benz[93873]: {}
Dec  1 04:50:44 np0005540825 systemd[1]: libpod-07e25b4628598c9922c56f63d9ac5e30f084bfdeb3faf598086fbf2d7c93f054.scope: Deactivated successfully.
Dec  1 04:50:44 np0005540825 systemd[1]: libpod-07e25b4628598c9922c56f63d9ac5e30f084bfdeb3faf598086fbf2d7c93f054.scope: Consumed 1.237s CPU time.
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:44 np0005540825 systemd[1]: var-lib-containers-storage-overlay-57ac53ab9221b4989ab17a17d96dda08aed9b9fbf5b76e41ed812d7854447d04-merged.mount: Deactivated successfully.
Dec  1 04:50:44 np0005540825 podman[93878]: 2025-12-01 09:50:44.754584241 +0000 UTC m=+1.369452008 container remove 9a9981e0d407eb186cad21b6e9c56d08133fe7b1003509624061fcb2c08c2f80 (image=quay.io/ceph/ceph:v19, name=charming_mendel, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 04:50:44 np0005540825 podman[93857]: 2025-12-01 09:50:44.766349511 +0000 UTC m=+1.547683984 container died 07e25b4628598c9922c56f63d9ac5e30f084bfdeb3faf598086fbf2d7c93f054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_benz, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 04:50:44 np0005540825 ansible-async_wrapper.py[93390]: Done in kid B.
Dec  1 04:50:44 np0005540825 systemd[1]: libpod-conmon-9a9981e0d407eb186cad21b6e9c56d08133fe7b1003509624061fcb2c08c2f80.scope: Deactivated successfully.
Dec  1 04:50:44 np0005540825 systemd[1]: var-lib-containers-storage-overlay-83f7b718d54c35a9f97dd3306819e4bbb32c934b8b6be2df36ad632be64b50a6-merged.mount: Deactivated successfully.
Dec  1 04:50:44 np0005540825 podman[94003]: 2025-12-01 09:50:44.818493589 +0000 UTC m=+0.660270078 container remove 07e25b4628598c9922c56f63d9ac5e30f084bfdeb3faf598086fbf2d7c93f054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_benz, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:50:44 np0005540825 systemd[1]: libpod-conmon-07e25b4628598c9922c56f63d9ac5e30f084bfdeb3faf598086fbf2d7c93f054.scope: Deactivated successfully.
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:44 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 392e4f84-981d-4c70-8f89-7ae66a34c6f7 (Updating rgw.rgw deployment (+3 -> 3))
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ugomkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ugomkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ugomkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
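The auth get-or-create mon_command dispatched and finished above is issued by cephadm for each RGW daemon it deploys. Run by hand it would be roughly the following, with the same entity and caps as in the audit lines:

    ceph auth get-or-create client.rgw.rgw.compute-2.ugomkp \
        mon 'allow *' mgr 'allow rw' osd 'allow rwx tag rgw *=*'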
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:50:44 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.ugomkp on compute-2
Dec  1 04:50:44 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.ugomkp on compute-2
Dec  1 04:50:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v20: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:50:45 np0005540825 python3[94043]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:45 np0005540825 podman[94044]: 2025-12-01 09:50:45.834793909 +0000 UTC m=+0.050890065 container create d30289115c6d4b5418ebd52a9e42459e7f661f6abc23be2766aaecd91c18643a (image=quay.io/ceph/ceph:v19, name=kind_roentgen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  1 04:50:45 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:45 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:45 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ugomkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:50:45 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ugomkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  1 04:50:45 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:45 np0005540825 systemd[1]: Started libpod-conmon-d30289115c6d4b5418ebd52a9e42459e7f661f6abc23be2766aaecd91c18643a.scope.
Dec  1 04:50:45 np0005540825 podman[94044]: 2025-12-01 09:50:45.812024538 +0000 UTC m=+0.028120714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:45 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:45 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d6b4bee0395cc138551328e318e813396659f92d2083dfa8ff16e0d07fb1f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:45 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d6b4bee0395cc138551328e318e813396659f92d2083dfa8ff16e0d07fb1f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:45 np0005540825 podman[94044]: 2025-12-01 09:50:45.934477462 +0000 UTC m=+0.150573638 container init d30289115c6d4b5418ebd52a9e42459e7f661f6abc23be2766aaecd91c18643a (image=quay.io/ceph/ceph:v19, name=kind_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 04:50:45 np0005540825 podman[94044]: 2025-12-01 09:50:45.940651825 +0000 UTC m=+0.156747971 container start d30289115c6d4b5418ebd52a9e42459e7f661f6abc23be2766aaecd91c18643a (image=quay.io/ceph/ceph:v19, name=kind_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:45 np0005540825 podman[94044]: 2025-12-01 09:50:45.943754867 +0000 UTC m=+0.159851023 container attach d30289115c6d4b5418ebd52a9e42459e7f661f6abc23be2766aaecd91c18643a (image=quay.io/ceph/ceph:v19, name=kind_roentgen, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  1 04:50:46 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  1 04:50:46 np0005540825 kind_roentgen[94059]: 
Dec  1 04:50:46 np0005540825 kind_roentgen[94059]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec  1 04:50:46 np0005540825 systemd[1]: libpod-d30289115c6d4b5418ebd52a9e42459e7f661f6abc23be2766aaecd91c18643a.scope: Deactivated successfully.
Dec  1 04:50:46 np0005540825 podman[94044]: 2025-12-01 09:50:46.347813278 +0000 UTC m=+0.563909464 container died d30289115c6d4b5418ebd52a9e42459e7f661f6abc23be2766aaecd91c18643a (image=quay.io/ceph/ceph:v19, name=kind_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  1 04:50:46 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c1d6b4bee0395cc138551328e318e813396659f92d2083dfa8ff16e0d07fb1f5-merged.mount: Deactivated successfully.
Dec  1 04:50:46 np0005540825 podman[94044]: 2025-12-01 09:50:46.397224803 +0000 UTC m=+0.613320999 container remove d30289115c6d4b5418ebd52a9e42459e7f661f6abc23be2766aaecd91c18643a (image=quay.io/ceph/ceph:v19, name=kind_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  1 04:50:46 np0005540825 systemd[1]: libpod-conmon-d30289115c6d4b5418ebd52a9e42459e7f661f6abc23be2766aaecd91c18643a.scope: Deactivated successfully.
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.alkudt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.alkudt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.alkudt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:50:46 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.alkudt on compute-1
Dec  1 04:50:46 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.alkudt on compute-1
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: Deploying daemon rgw.rgw.compute-2.ugomkp on compute-2
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.alkudt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.alkudt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec  1 04:50:46 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  1 04:50:47 np0005540825 python3[94124]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:47 np0005540825 podman[94125]: 2025-12-01 09:50:47.511365088 +0000 UTC m=+0.053485774 container create 44f8cf48008530f025f03ba08c55fa6ab7a263ece2ad7d0af8b5ec3ef935ae08 (image=quay.io/ceph/ceph:v19, name=hardcore_hermann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 04:50:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v22: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:50:47 np0005540825 systemd[1]: Started libpod-conmon-44f8cf48008530f025f03ba08c55fa6ab7a263ece2ad7d0af8b5ec3ef935ae08.scope.
Dec  1 04:50:47 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:47 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a95fad8bb9dae60557c5a14899c891606aa84ec5b7ce1ff154f5f9f06aa894/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:47 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a95fad8bb9dae60557c5a14899c891606aa84ec5b7ce1ff154f5f9f06aa894/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:47 np0005540825 podman[94125]: 2025-12-01 09:50:47.490744773 +0000 UTC m=+0.032865449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:47 np0005540825 podman[94125]: 2025-12-01 09:50:47.599235178 +0000 UTC m=+0.141355874 container init 44f8cf48008530f025f03ba08c55fa6ab7a263ece2ad7d0af8b5ec3ef935ae08 (image=quay.io/ceph/ceph:v19, name=hardcore_hermann, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Dec  1 04:50:47 np0005540825 podman[94125]: 2025-12-01 09:50:47.603885741 +0000 UTC m=+0.146006387 container start 44f8cf48008530f025f03ba08c55fa6ab7a263ece2ad7d0af8b5ec3ef935ae08 (image=quay.io/ceph/ceph:v19, name=hardcore_hermann, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  1 04:50:47 np0005540825 podman[94125]: 2025-12-01 09:50:47.60760727 +0000 UTC m=+0.149727916 container attach 44f8cf48008530f025f03ba08c55fa6ab7a263ece2ad7d0af8b5ec3ef935ae08 (image=quay.io/ceph/ceph:v19, name=hardcore_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 04:50:47 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  1 04:50:47 np0005540825 hardcore_hermann[94140]: 
Dec  1 04:50:47 np0005540825 hardcore_hermann[94140]: [{"container_id": "845bc98e981e", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.15%", "created": "2025-12-01T09:47:33.939904Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-01T09:50:21.857565Z", "memory_usage": 7790919, "ports": [], "service_name": "crash", "started": "2025-12-01T09:47:33.795058Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@crash.compute-0", "version": "19.2.3"}, {"container_id": "7c618b1a07db", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.32%", "created": "2025-12-01T09:48:12.410017Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-01T09:50:21.336067Z", "memory_usage": 7812939, "ports": [], "service_name": "crash", "started": "2025-12-01T09:48:12.294057Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@crash.compute-1", "version": "19.2.3"}, {"container_id": "7ddc5516224a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.38%", "created": "2025-12-01T09:49:35.973775Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-01T09:50:21.623944Z", "memory_usage": 7812939, "ports": [], "service_name": "crash", "started": "2025-12-01T09:49:35.876044Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@crash.compute-2", "version": "19.2.3"}, {"container_id": "47856f96919c", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "25.73%", "created": "2025-12-01T09:46:54.916888Z", "daemon_id": "compute-0.fospow", "daemon_name": "mgr.compute-0.fospow", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-01T09:50:21.857465Z", "memory_usage": 541799219, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-01T09:46:54.795014Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@mgr.compute-0.fospow", "version": "19.2.3"}, {"container_id": "39c0da9ad64f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "54.34%", "created": "2025-12-01T09:49:31.641625Z", "daemon_id": "compute-1.ymizfm", "daemon_name": "mgr.compute-1.ymizfm", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-01T09:50:21.336337Z", "memory_usage": 505413632, "ports": [8765], "service_name": "mgr", "started": "2025-12-01T09:49:31.543587Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@mgr.compute-1.ymizfm", "version": "19.2.3"}, {"container_id": "00006d9f2ff7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "49.79%", "created": "2025-12-01T09:49:24.355498Z", "daemon_id": "compute-2.kdtkls", "daemon_name": "mgr.compute-2.kdtkls", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-01T09:50:21.623842Z", "memory_usage": 447846809, "ports": [8765], "service_name": "mgr", "started": "2025-12-01T09:49:24.258289Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@mgr.compute-2.kdtkls", "version": "19.2.3"}, {"container_id": "04e54403a63b", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.63%", "created": "2025-12-01T09:46:50.379768Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-01T09:50:21.857344Z", "memory_request": 2147483648, "memory_usage": 57608765, "ports": [], "service_name": "mon", "started": "2025-12-01T09:46:52.729533Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@mon.compute-0", "version": "19.2.3"}, {"container_id": "7505fa15a86e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.62%", "created": "2025-12-01T09:49:19.469299Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-01T09:50:21.336240Z", "memory_request": 2147483648, "memory_usage": 42939187, "ports": [], "service_name": "mon", "started": "2025-12-01T09:49:19.348889Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@mon.compute-1", "version": "19.2.3"}, {"container_id": "51d0f56cc34d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.95%", "created": "2025-12-01T09:49:17.270328Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-01T09:50:21.623713Z", "memory_request": 2147483648, "memory_usage": 42341498, "ports": [], "service_name": "mon", "started": "2025-12-01T09:49:17.187139Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@mon.compute-2", "version": "19.2.3"}, {"container_id": "cd3077bd2d5a", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80"
Dec  1 04:50:48 np0005540825 systemd[1]: libpod-44f8cf48008530f025f03ba08c55fa6ab7a263ece2ad7d0af8b5ec3ef935ae08.scope: Deactivated successfully.
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:50:48 np0005540825 podman[94165]: 2025-12-01 09:50:48.045712909 +0000 UTC m=+0.031189105 container died 44f8cf48008530f025f03ba08c55fa6ab7a263ece2ad7d0af8b5ec3ef935ae08 (image=quay.io/ceph/ceph:v19, name=hardcore_hermann, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: Deploying daemon rgw.rgw.compute-1.alkudt on compute-1
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.102:0/1702895159' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  1 04:50:48 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  1 04:50:48 np0005540825 rsyslogd[1006]: message too long (12734) with configured size 8096, begin of message is: [{"container_id": "845bc98e981e", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
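The rsyslogd line above explains the two unprefixed continuation lines earlier: the `orch ps -f json` reply from hardcore_hermann is 12734 bytes, over rsyslog's configured 8096-byte limit, so the message was split and cut off mid-array. The full output would have to be read from the journal or from Ansible's captured stdout. One common remedy, assuming the default message-size limit is the only constraint here, is to raise it in /etc/rsyslog.conf:

    # /etc/rsyslog.conf -- must appear before input modules are loaded to take effect
    $MaxMessageSize 64k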
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
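The POOL_APP_NOT_ENABLED health warning logged at 04:50:48 appears because the newly created .rgw.root pool has no application tag yet; the mon_command the RGW daemon dispatches, shown finishing above, is equivalent to:

    ceph osd pool application enable .rgw.root rgw

The same pattern repeats below for default.rgw.log as further RGW pools are created.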
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec  1 04:50:48 np0005540825 systemd[1]: var-lib-containers-storage-overlay-63a95fad8bb9dae60557c5a14899c891606aa84ec5b7ce1ff154f5f9f06aa894-merged.mount: Deactivated successfully.
Dec  1 04:50:48 np0005540825 podman[94165]: 2025-12-01 09:50:48.207985825 +0000 UTC m=+0.193461951 container remove 44f8cf48008530f025f03ba08c55fa6ab7a263ece2ad7d0af8b5ec3ef935ae08 (image=quay.io/ceph/ceph:v19, name=hardcore_hermann, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 04:50:48 np0005540825 systemd[1]: libpod-conmon-44f8cf48008530f025f03ba08c55fa6ab7a263ece2ad7d0af8b5ec3ef935ae08.scope: Deactivated successfully.
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mxrshg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mxrshg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mxrshg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:50:48 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.mxrshg on compute-0
Dec  1 04:50:48 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.mxrshg on compute-0
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mxrshg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mxrshg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: Deploying daemon rgw.rgw.compute-0.mxrshg on compute-0
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec  1 04:50:49 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 50 pg[10.0( empty local-lis/les=0/0 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [1] r=0 lpr=50 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  1 04:50:49 np0005540825 python3[94274]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:49 np0005540825 podman[94301]: 2025-12-01 09:50:49.312244418 +0000 UTC m=+0.053958956 container create c76df96856c681ec0b41243809cf06ad28733cfca0384c57acbe5c6e5c54ef62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:49 np0005540825 podman[94313]: 2025-12-01 09:50:49.343015051 +0000 UTC m=+0.054434279 container create a25a8fd6d8acae257f8e395c70c0ad81f015d5e1f4109288a1dd7ae2d6010325 (image=quay.io/ceph/ceph:v19, name=serene_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:50:49 np0005540825 systemd[1]: Started libpod-conmon-c76df96856c681ec0b41243809cf06ad28733cfca0384c57acbe5c6e5c54ef62.scope.
Dec  1 04:50:49 np0005540825 systemd[1]: Started libpod-conmon-a25a8fd6d8acae257f8e395c70c0ad81f015d5e1f4109288a1dd7ae2d6010325.scope.
Dec  1 04:50:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:49 np0005540825 podman[94301]: 2025-12-01 09:50:49.288061789 +0000 UTC m=+0.029776347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:50:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34f1af2cdfed55e7a4893b8bc4a50196f04cc9651e4ba1beba69868bd15590f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34f1af2cdfed55e7a4893b8bc4a50196f04cc9651e4ba1beba69868bd15590f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:49 np0005540825 podman[94301]: 2025-12-01 09:50:49.402768859 +0000 UTC m=+0.144483417 container init c76df96856c681ec0b41243809cf06ad28733cfca0384c57acbe5c6e5c54ef62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_almeida, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:49 np0005540825 podman[94313]: 2025-12-01 09:50:49.404842614 +0000 UTC m=+0.116261862 container init a25a8fd6d8acae257f8e395c70c0ad81f015d5e1f4109288a1dd7ae2d6010325 (image=quay.io/ceph/ceph:v19, name=serene_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 04:50:49 np0005540825 podman[94313]: 2025-12-01 09:50:49.314853187 +0000 UTC m=+0.026272435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:49 np0005540825 podman[94301]: 2025-12-01 09:50:49.410820311 +0000 UTC m=+0.152534849 container start c76df96856c681ec0b41243809cf06ad28733cfca0384c57acbe5c6e5c54ef62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  1 04:50:49 np0005540825 podman[94313]: 2025-12-01 09:50:49.412193368 +0000 UTC m=+0.123612596 container start a25a8fd6d8acae257f8e395c70c0ad81f015d5e1f4109288a1dd7ae2d6010325 (image=quay.io/ceph/ceph:v19, name=serene_nightingale, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Dec  1 04:50:49 np0005540825 upbeat_almeida[94333]: 167 167
Dec  1 04:50:49 np0005540825 podman[94301]: 2025-12-01 09:50:49.415031163 +0000 UTC m=+0.156745741 container attach c76df96856c681ec0b41243809cf06ad28733cfca0384c57acbe5c6e5c54ef62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_almeida, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 04:50:49 np0005540825 systemd[1]: libpod-c76df96856c681ec0b41243809cf06ad28733cfca0384c57acbe5c6e5c54ef62.scope: Deactivated successfully.
Dec  1 04:50:49 np0005540825 podman[94313]: 2025-12-01 09:50:49.417623721 +0000 UTC m=+0.129042969 container attach a25a8fd6d8acae257f8e395c70c0ad81f015d5e1f4109288a1dd7ae2d6010325 (image=quay.io/ceph/ceph:v19, name=serene_nightingale, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:49 np0005540825 podman[94301]: 2025-12-01 09:50:49.419230134 +0000 UTC m=+0.160944672 container died c76df96856c681ec0b41243809cf06ad28733cfca0384c57acbe5c6e5c54ef62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:50:49 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4b716fa1a70e9db1b9a4433b906045a438144f6f77ab3177ee393ec2b8f860fc-merged.mount: Deactivated successfully.
Dec  1 04:50:49 np0005540825 podman[94301]: 2025-12-01 09:50:49.456605881 +0000 UTC m=+0.198320439 container remove c76df96856c681ec0b41243809cf06ad28733cfca0384c57acbe5c6e5c54ef62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:49 np0005540825 systemd[1]: libpod-conmon-c76df96856c681ec0b41243809cf06ad28733cfca0384c57acbe5c6e5c54ef62.scope: Deactivated successfully.
Dec  1 04:50:49 np0005540825 systemd[1]: Reloading.
Dec  1 04:50:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v25: 196 pgs: 2 unknown, 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:50:49 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:50:49 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:50:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:50:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:50:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:50:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:50:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:50:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:50:49 np0005540825 ceph-mgr[74709]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Dec  1 04:50:49 np0005540825 systemd[1]: Reloading.
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  1 04:50:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3400550637' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  1 04:50:49 np0005540825 serene_nightingale[94335]: 
Dec  1 04:50:49 np0005540825 serene_nightingale[94335]: {"fsid":"365f19c2-81e5-5edd-b6b4-280555214d3a","health":{"status":"HEALTH_ERR","checks":{"BLUESTORE_SLOW_OP_ALERT":{"severity":"HEALTH_WARN","summary":{"message":"1 OSD(s) experiencing slow operations in BlueStore","count":1},"muted":false},"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false},"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":80,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":50,"num_osds":3,"num_up_osds":3,"osd_up_since":1764582605,"num_in_osds":3,"osd_in_since":1764582579,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194},{"state_name":"unknown","count":1}],"num_pgs":195,"num_pools":9,"num_objects":3,"data_bytes":459280,"bytes_used":84402176,"bytes_avail":64327524352,"bytes_total":64411926528,"unknown_pgs_ratio":0.0051282052882015705},"fsmap":{"epoch":2,"btime":"2025-12-01T09:50:20:704588+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":3,"modified":"2025-12-01T09:49:34.718813+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"392e4f84-981d-4c70-8f89-7ae66a34c6f7":{"message":"Updating rgw.rgw deployment (+3 -> 3) (1s)\n      [=========...................] (remaining: 3s)","progress":0.3333333432674408,"add_to_ceph_s":true}}}
Dec  1 04:50:49 np0005540825 podman[94313]: 2025-12-01 09:50:49.87574219 +0000 UTC m=+0.587161418 container died a25a8fd6d8acae257f8e395c70c0ad81f015d5e1f4109288a1dd7ae2d6010325 (image=quay.io/ceph/ceph:v19, name=serene_nightingale, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:49 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:50:49 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:50:50 np0005540825 systemd[1]: libpod-a25a8fd6d8acae257f8e395c70c0ad81f015d5e1f4109288a1dd7ae2d6010325.scope: Deactivated successfully.
Dec  1 04:50:50 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e34f1af2cdfed55e7a4893b8bc4a50196f04cc9651e4ba1beba69868bd15590f-merged.mount: Deactivated successfully.
Dec  1 04:50:50 np0005540825 podman[94416]: 2025-12-01 09:50:50.130596101 +0000 UTC m=+0.237438572 container remove a25a8fd6d8acae257f8e395c70c0ad81f015d5e1f4109288a1dd7ae2d6010325 (image=quay.io/ceph/ceph:v19, name=serene_nightingale, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:50 np0005540825 systemd[1]: Starting Ceph rgw.rgw.compute-0.mxrshg for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:50:50 np0005540825 systemd[1]: libpod-conmon-a25a8fd6d8acae257f8e395c70c0ad81f015d5e1f4109288a1dd7ae2d6010325.scope: Deactivated successfully.
Dec  1 04:50:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec  1 04:50:50 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.102:0/4186493149' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  1 04:50:50 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  1 04:50:50 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.101:0/603108535' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  1 04:50:50 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  1 04:50:50 np0005540825 podman[94516]: 2025-12-01 09:50:50.402858841 +0000 UTC m=+0.053714589 container create 5ffef9f519f13d462d420a0c8aee7009771dbef2cc02df1a757c8ed6a6507f1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-rgw-rgw-compute-0-mxrshg, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  1 04:50:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  1 04:50:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  1 04:50:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec  1 04:50:50 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec  1 04:50:50 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 51 pg[10.0( empty local-lis/les=50/51 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [1] r=0 lpr=50 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:50:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c73e7da448856152a9f4237bb9968416f3b58d1329c126af8f3210bad904980/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c73e7da448856152a9f4237bb9968416f3b58d1329c126af8f3210bad904980/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c73e7da448856152a9f4237bb9968416f3b58d1329c126af8f3210bad904980/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:50 np0005540825 podman[94516]: 2025-12-01 09:50:50.376101065 +0000 UTC m=+0.026956883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:50:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c73e7da448856152a9f4237bb9968416f3b58d1329c126af8f3210bad904980/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.mxrshg supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:50 np0005540825 podman[94516]: 2025-12-01 09:50:50.485134964 +0000 UTC m=+0.135990812 container init 5ffef9f519f13d462d420a0c8aee7009771dbef2cc02df1a757c8ed6a6507f1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-rgw-rgw-compute-0-mxrshg, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:50:50 np0005540825 podman[94516]: 2025-12-01 09:50:50.493611648 +0000 UTC m=+0.144467386 container start 5ffef9f519f13d462d420a0c8aee7009771dbef2cc02df1a757c8ed6a6507f1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-rgw-rgw-compute-0-mxrshg, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 04:50:50 np0005540825 bash[94516]: 5ffef9f519f13d462d420a0c8aee7009771dbef2cc02df1a757c8ed6a6507f1b
Dec  1 04:50:50 np0005540825 systemd[1]: Started Ceph rgw.rgw.compute-0.mxrshg for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:50:50 np0005540825 radosgw[94538]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec  1 04:50:50 np0005540825 radosgw[94538]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec  1 04:50:50 np0005540825 radosgw[94538]: framework: beast
Dec  1 04:50:50 np0005540825 radosgw[94538]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec  1 04:50:50 np0005540825 radosgw[94538]: init_numa not setting numa affinity
Dec  1 04:50:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:50:51 np0005540825 python3[95152]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:51 np0005540825 podman[95153]: 2025-12-01 09:50:51.205826778 +0000 UTC m=+0.053017072 container create de1cf797cb943f0b86c868336728af48e8b413c2818ebc5cc2a0bb5e88f18c9e (image=quay.io/ceph/ceph:v19, name=jovial_kowalevski, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:51 np0005540825 systemd[1]: Started libpod-conmon-de1cf797cb943f0b86c868336728af48e8b413c2818ebc5cc2a0bb5e88f18c9e.scope.
Dec  1 04:50:51 np0005540825 podman[95153]: 2025-12-01 09:50:51.183791336 +0000 UTC m=+0.030981620 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c105b104de724d5c2d023aece7394a686fec00db3944d4e81b9f06f47e557e8f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c105b104de724d5c2d023aece7394a686fec00db3944d4e81b9f06f47e557e8f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:51 np0005540825 podman[95153]: 2025-12-01 09:50:51.306524277 +0000 UTC m=+0.153714631 container init de1cf797cb943f0b86c868336728af48e8b413c2818ebc5cc2a0bb5e88f18c9e (image=quay.io/ceph/ceph:v19, name=jovial_kowalevski, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:51 np0005540825 podman[95153]: 2025-12-01 09:50:51.314520938 +0000 UTC m=+0.161711202 container start de1cf797cb943f0b86c868336728af48e8b413c2818ebc5cc2a0bb5e88f18c9e (image=quay.io/ceph/ceph:v19, name=jovial_kowalevski, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:50:51 np0005540825 podman[95153]: 2025-12-01 09:50:51.319196842 +0000 UTC m=+0.166387186 container attach de1cf797cb943f0b86c868336728af48e8b413c2818ebc5cc2a0bb5e88f18c9e (image=quay.io/ceph/ceph:v19, name=jovial_kowalevski, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  1 04:50:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec  1 04:50:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v27: 196 pgs: 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 1.3 KiB/s wr, 7 op/s
Dec  1 04:50:51 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  1 04:50:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  1 04:50:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4018065374' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  1 04:50:51 np0005540825 jovial_kowalevski[95169]: 
Dec  1 04:50:51 np0005540825 jovial_kowalevski[95169]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_a
llow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.fospow/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.ymizfm/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.kdtkls/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502929715","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.mxrshg","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.alkudt","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.ugomkp","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec  1 04:50:51 np0005540825 systemd[1]: libpod-de1cf797cb943f0b86c868336728af48e8b413c2818ebc5cc2a0bb5e88f18c9e.scope: Deactivated successfully.
Dec  1 04:50:51 np0005540825 podman[95153]: 2025-12-01 09:50:51.734782326 +0000 UTC m=+0.581972590 container died de1cf797cb943f0b86c868336728af48e8b413c2818ebc5cc2a0bb5e88f18c9e (image=quay.io/ceph/ceph:v19, name=jovial_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:50:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c105b104de724d5c2d023aece7394a686fec00db3944d4e81b9f06f47e557e8f-merged.mount: Deactivated successfully.
Dec  1 04:50:52 np0005540825 podman[95153]: 2025-12-01 09:50:52.153996658 +0000 UTC m=+1.001186922 container remove de1cf797cb943f0b86c868336728af48e8b413c2818ebc5cc2a0bb5e88f18c9e (image=quay.io/ceph/ceph:v19, name=jovial_kowalevski, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 04:50:52 np0005540825 systemd[1]: libpod-conmon-de1cf797cb943f0b86c868336728af48e8b413c2818ebc5cc2a0bb5e88f18c9e.scope: Deactivated successfully.
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:52 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 392e4f84-981d-4c70-8f89-7ae66a34c6f7 (Updating rgw.rgw deployment (+3 -> 3))
Dec  1 04:50:52 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 392e4f84-981d-4c70-8f89-7ae66a34c6f7 (Updating rgw.rgw deployment (+3 -> 3)) in 8 seconds
Dec  1 04:50:52 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:52 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:52 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 6440dccf-ef58-4192-9475-cd7da111d5d8 (Updating mds.cephfs deployment (+3 -> 3))
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.yoegjc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.yoegjc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.yoegjc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:50:52 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.yoegjc on compute-2
Dec  1 04:50:52 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.yoegjc on compute-2
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.101:0/603108535' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.yoegjc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.yoegjc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: Deploying daemon mds.cephfs.compute-2.yoegjc on compute-2
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec  1 04:50:53 np0005540825 python3[95233]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:50:53 np0005540825 podman[95234]: 2025-12-01 09:50:53.502246135 +0000 UTC m=+0.082330935 container create 779ab4d9c99a272cfa33f1c26ab65babbc285dc94a3d95fe55fa43356f12b687 (image=quay.io/ceph/ceph:v19, name=vigilant_goldstine, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 04:50:53 np0005540825 systemd[1]: Started libpod-conmon-779ab4d9c99a272cfa33f1c26ab65babbc285dc94a3d95fe55fa43356f12b687.scope.
Dec  1 04:50:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v30: 197 pgs: 1 unknown, 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 1.4 KiB/s wr, 7 op/s
Dec  1 04:50:53 np0005540825 podman[95234]: 2025-12-01 09:50:53.469019938 +0000 UTC m=+0.049104828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:53 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad6612e16c7495ec472e11ba81e863af9385296abb5d11b462b320c5370c68e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad6612e16c7495ec472e11ba81e863af9385296abb5d11b462b320c5370c68e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:53 np0005540825 podman[95234]: 2025-12-01 09:50:53.578634473 +0000 UTC m=+0.158719373 container init 779ab4d9c99a272cfa33f1c26ab65babbc285dc94a3d95fe55fa43356f12b687 (image=quay.io/ceph/ceph:v19, name=vigilant_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:50:53 np0005540825 podman[95234]: 2025-12-01 09:50:53.591223915 +0000 UTC m=+0.171308755 container start 779ab4d9c99a272cfa33f1c26ab65babbc285dc94a3d95fe55fa43356f12b687 (image=quay.io/ceph/ceph:v19, name=vigilant_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:53 np0005540825 podman[95234]: 2025-12-01 09:50:53.595843407 +0000 UTC m=+0.175928237 container attach 779ab4d9c99a272cfa33f1c26ab65babbc285dc94a3d95fe55fa43356f12b687 (image=quay.io/ceph/ceph:v19, name=vigilant_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec  1 04:50:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4217549561' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec  1 04:50:53 np0005540825 vigilant_goldstine[95254]: mimic
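The one-shot container above exists only to run a single Ceph CLI call: its stdout is the single word "mimic", the cluster's current require-min-compat-client value (the oldest client release the monitors will still admit). A minimal sketch of the same query, assuming the admin keyring is mounted from /etc/ceph as in the podman invocation logged further below; the container name vigilant_goldstine is just podman's random name:

    # One-shot query: oldest client release the cluster still accepts
    podman run --rm --net=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        osd get-require-min-compat-client
    # expected output on this cluster: mimic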
Dec  1 04:50:53 np0005540825 systemd[1]: libpod-779ab4d9c99a272cfa33f1c26ab65babbc285dc94a3d95fe55fa43356f12b687.scope: Deactivated successfully.
Dec  1 04:50:53 np0005540825 podman[95234]: 2025-12-01 09:50:53.954091729 +0000 UTC m=+0.534176549 container died 779ab4d9c99a272cfa33f1c26ab65babbc285dc94a3d95fe55fa43356f12b687 (image=quay.io/ceph/ceph:v19, name=vigilant_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:53 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2ad6612e16c7495ec472e11ba81e863af9385296abb5d11b462b320c5370c68e-merged.mount: Deactivated successfully.
Dec  1 04:50:53 np0005540825 podman[95234]: 2025-12-01 09:50:53.998762218 +0000 UTC m=+0.578847018 container remove 779ab4d9c99a272cfa33f1c26ab65babbc285dc94a3d95fe55fa43356f12b687 (image=quay.io/ceph/ceph:v19, name=vigilant_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:54 np0005540825 systemd[1]: libpod-conmon-779ab4d9c99a272cfa33f1c26ab65babbc285dc94a3d95fe55fa43356f12b687.scope: Deactivated successfully.
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xijran", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xijran", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xijran", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
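Deploying an MDS shows up in the audit log as two monitor commands: a keyring is created for the new daemon, then a minimal ceph.conf is generated to ship to the target host. Run by hand, using the exact entity name and caps from the lines above, the sequence would look like this:

    # Create (or return the existing) key for the new MDS daemon
    ceph auth get-or-create mds.cephfs.compute-0.xijran \
        mon 'profile mds' \
        osd 'allow rw tag cephfs *=*' \
        mds 'allow'

    # Produce the minimal config file cephadm copies to the daemon's host
    ceph config generate-minimal-conf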
Dec  1 04:50:54 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.xijran on compute-0
Dec  1 04:50:54 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.xijran on compute-0
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e3 new map
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e3 print_map
e3
btime 2025-12-01T09:50:54.337178+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-12-01T09:50:20.704523+0000
modified	2025-12-01T09:50:20.704523+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in
up	{}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 0 members:

Standby daemons:

[mds.cephfs.compute-2.yoegjc{-1:24223} state up:standby seq 1 addr [v2:192.168.122.102:6804/3260542897,v1:192.168.122.102:6805/3260542897] compat {c=[1],r=[1],i=[1fff]}]
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xijran", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xijran", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3260542897,v1:192.168.122.102:6805/3260542897] up:boot
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/3260542897,v1:192.168.122.102:6805/3260542897] as mds.0
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.yoegjc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
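With the standby promoted to rank 0, both MDS health checks clear and the filesystem becomes serviceable. The state the monitor is printing here can be inspected interactively with standard commands:

    # Summarized filesystem and rank state
    ceph fs status cephfs

    # Full fsmap, equivalent to the print_map dumps in the monitor log
    ceph fs dump

    # Confirm MDS_ALL_DOWN / MDS_UP_LESS_THAN_MAX are gone
    ceph health detail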
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.yoegjc"} v 0)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.yoegjc"}]: dispatch
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e3 all = 0
Dec  1 04:50:54 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 54 pg[12.0( empty local-lis/les=0/0 n=0 ec=54/54 lis/c=0/0 les/c/f=0/0/0 sis=54) [1] r=0 lpr=54 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e4 new map
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e4 print_map
e4
btime 2025-12-01T09:50:54.367365+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	4
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-12-01T09:50:20.704523+0000
modified	2025-12-01T09:50:54.367356+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	0
up	{0=24223}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 0 members:
[mds.cephfs.compute-2.yoegjc{0:24223} state up:creating seq 1 addr [v2:192.168.122.102:6804/3260542897,v1:192.168.122.102:6805/3260542897] compat {c=[1],r=[1],i=[1fff]}]
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoegjc=up:creating}
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.yoegjc is now active in filesystem cephfs as rank 0
Dec  1 04:50:54 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 15 completed events
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:50:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:54 np0005540825 podman[95387]: 2025-12-01 09:50:54.873029828 +0000 UTC m=+0.062391749 container create a64eb6401bfd3904bce7d08313c6e2fb6309ce3da0f0e835c6cff49ebdf1072e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_moser, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 04:50:54 np0005540825 systemd[1]: Started libpod-conmon-a64eb6401bfd3904bce7d08313c6e2fb6309ce3da0f0e835c6cff49ebdf1072e.scope.
Dec  1 04:50:54 np0005540825 podman[95387]: 2025-12-01 09:50:54.841933447 +0000 UTC m=+0.031295438 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:50:54 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:54 np0005540825 podman[95387]: 2025-12-01 09:50:54.957059557 +0000 UTC m=+0.146421508 container init a64eb6401bfd3904bce7d08313c6e2fb6309ce3da0f0e835c6cff49ebdf1072e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_moser, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:54 np0005540825 podman[95387]: 2025-12-01 09:50:54.967269397 +0000 UTC m=+0.156631328 container start a64eb6401bfd3904bce7d08313c6e2fb6309ce3da0f0e835c6cff49ebdf1072e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:50:54 np0005540825 podman[95387]: 2025-12-01 09:50:54.970960144 +0000 UTC m=+0.160322065 container attach a64eb6401bfd3904bce7d08313c6e2fb6309ce3da0f0e835c6cff49ebdf1072e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_moser, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 04:50:54 np0005540825 suspicious_moser[95429]: 167 167
Dec  1 04:50:54 np0005540825 systemd[1]: libpod-a64eb6401bfd3904bce7d08313c6e2fb6309ce3da0f0e835c6cff49ebdf1072e.scope: Deactivated successfully.
Dec  1 04:50:54 np0005540825 conmon[95429]: conmon a64eb6401bfd3904bce7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a64eb6401bfd3904bce7d08313c6e2fb6309ce3da0f0e835c6cff49ebdf1072e.scope/container/memory.events
Dec  1 04:50:55 np0005540825 python3[95426]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
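The ansible task above shells out to podman to run `ceph versions` inside the v19 image. Reflowed for readability, the command it executes is:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a \
        -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring \
        versions -f json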
Dec  1 04:50:55 np0005540825 podman[95434]: 2025-12-01 09:50:55.034832101 +0000 UTC m=+0.032739905 container died a64eb6401bfd3904bce7d08313c6e2fb6309ce3da0f0e835c6cff49ebdf1072e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_moser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  1 04:50:55 np0005540825 systemd[1]: var-lib-containers-storage-overlay-144e74b43c333b29d5343ff0ef0aa1b5c51bcadb2258c85ab3a6500e62864d59-merged.mount: Deactivated successfully.
Dec  1 04:50:55 np0005540825 podman[95434]: 2025-12-01 09:50:55.075245469 +0000 UTC m=+0.073153273 container remove a64eb6401bfd3904bce7d08313c6e2fb6309ce3da0f0e835c6cff49ebdf1072e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 04:50:55 np0005540825 systemd[1]: libpod-conmon-a64eb6401bfd3904bce7d08313c6e2fb6309ce3da0f0e835c6cff49ebdf1072e.scope: Deactivated successfully.
Dec  1 04:50:55 np0005540825 podman[95440]: 2025-12-01 09:50:55.093891511 +0000 UTC m=+0.064152355 container create 4f854612ed591caba20135849093c513348a81e28b7d6d893013a708ae9029fb (image=quay.io/ceph/ceph:v19, name=vibrant_dubinsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:55 np0005540825 systemd[1]: Started libpod-conmon-4f854612ed591caba20135849093c513348a81e28b7d6d893013a708ae9029fb.scope.
Dec  1 04:50:55 np0005540825 systemd[1]: Reloading.
Dec  1 04:50:55 np0005540825 podman[95440]: 2025-12-01 09:50:55.065028759 +0000 UTC m=+0.035289623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:50:55 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:50:55 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof.
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec  1 04:50:55 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:50:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024aab9bd783b86f910cadb10eeb08c1d87d89255b5b2ac402fa1b123f6239c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024aab9bd783b86f910cadb10eeb08c1d87d89255b5b2ac402fa1b123f6239c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:55 np0005540825 podman[95440]: 2025-12-01 09:50:55.404623616 +0000 UTC m=+0.374884500 container init 4f854612ed591caba20135849093c513348a81e28b7d6d893013a708ae9029fb (image=quay.io/ceph/ceph:v19, name=vibrant_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:50:55 np0005540825 podman[95440]: 2025-12-01 09:50:55.411827997 +0000 UTC m=+0.382088831 container start 4f854612ed591caba20135849093c513348a81e28b7d6d893013a708ae9029fb (image=quay.io/ceph/ceph:v19, name=vibrant_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:50:55 np0005540825 podman[95440]: 2025-12-01 09:50:55.415543975 +0000 UTC m=+0.385804809 container attach 4f854612ed591caba20135849093c513348a81e28b7d6d893013a708ae9029fb (image=quay.io/ceph/ceph:v19, name=vibrant_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 04:50:55 np0005540825 systemd[1]: Reloading.
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: Deploying daemon mds.cephfs.compute-0.xijran on compute-0
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: daemon mds.cephfs.compute-2.yoegjc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.101:0/603108535' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.102:0/4186493149' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: daemon mds.cephfs.compute-2.yoegjc is now active in filesystem cephfs as rank 0
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:55 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof.
Dec  1 04:50:55 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:50:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 2 unknown, 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 1.2 KiB/s wr, 6 op/s
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e5 new map
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e5 print_map
e5
btime 2025-12-01T09:50:55.377739+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-12-01T09:50:20.704523+0000
modified	2025-12-01T09:50:55.377737+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	0
up	{0=24223}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 24223 members: 24223
[mds.cephfs.compute-2.yoegjc{0:24223} state up:active seq 2 addr [v2:192.168.122.102:6804/3260542897,v1:192.168.122.102:6805/3260542897] compat {c=[1],r=[1],i=[1fff]}]
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
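Each radosgw instance tags the pools it creates with the rgw application, which is why three clients race to dispatch the same command and all three report finished: the underlying call is idempotent. The manual equivalent is:

    # Tag a pool so POOL_APP_NOT_ENABLED does not fire for it
    ceph osd pool application enable default.rgw.meta rgw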
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3260542897,v1:192.168.122.102:6805/3260542897] up:active
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoegjc=up:active}
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  1 04:50:55 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 55 pg[12.0( empty local-lis/les=54/55 n=0 ec=54/54 lis/c=0/0 les/c/f=0/0/0 sis=54) [1] r=0 lpr=54 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
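The RGW daemons also bias the PG autoscaler for the metadata pool: pg_autoscale_bias multiplies the PG count the autoscaler would otherwise choose, which suits small but latency-sensitive pools like default.rgw.meta. Done by hand:

    # Weight default.rgw.meta 4x in pg_num autoscaling decisions
    ceph osd pool set default.rgw.meta pg_autoscale_bias 4

    # Review what the autoscaler now wants for each pool
    ceph osd pool autoscale-status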
Dec  1 04:50:55 np0005540825 systemd[1]: Starting Ceph mds.cephfs.compute-0.xijran for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:50:55 np0005540825 podman[95615]: 2025-12-01 09:50:55.879118828 +0000 UTC m=+0.040004328 container create a907dfcfb5c78b2a4208d7d4d25887b402594639131b09e753e3c3253a597137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mds-cephfs-compute-0-xijran, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec  1 04:50:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/167581482' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec  1 04:50:55 np0005540825 vibrant_dubinsky[95466]: 
Dec  1 04:50:55 np0005540825 systemd[1]: libpod-4f854612ed591caba20135849093c513348a81e28b7d6d893013a708ae9029fb.scope: Deactivated successfully.
Dec  1 04:50:55 np0005540825 vibrant_dubinsky[95466]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":1},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":10}}
Dec  1 04:50:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181a81a9b406f87db045135be39cf55fabece3d5a8cf9c0e41d19fa76a3be560/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181a81a9b406f87db045135be39cf55fabece3d5a8cf9c0e41d19fa76a3be560/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181a81a9b406f87db045135be39cf55fabece3d5a8cf9c0e41d19fa76a3be560/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:55 np0005540825 podman[95440]: 2025-12-01 09:50:55.934543222 +0000 UTC m=+0.904804056 container died 4f854612ed591caba20135849093c513348a81e28b7d6d893013a708ae9029fb (image=quay.io/ceph/ceph:v19, name=vibrant_dubinsky, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 04:50:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181a81a9b406f87db045135be39cf55fabece3d5a8cf9c0e41d19fa76a3be560/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.xijran supports timestamps until 2038 (0x7fffffff)
Dec  1 04:50:55 np0005540825 podman[95615]: 2025-12-01 09:50:55.948071409 +0000 UTC m=+0.108956959 container init a907dfcfb5c78b2a4208d7d4d25887b402594639131b09e753e3c3253a597137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mds-cephfs-compute-0-xijran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:50:55 np0005540825 podman[95615]: 2025-12-01 09:50:55.859400617 +0000 UTC m=+0.020286137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:50:56 np0005540825 podman[95615]: 2025-12-01 09:50:56.084085791 +0000 UTC m=+0.244971291 container start a907dfcfb5c78b2a4208d7d4d25887b402594639131b09e753e3c3253a597137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mds-cephfs-compute-0-xijran, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 04:50:56 np0005540825 bash[95615]: a907dfcfb5c78b2a4208d7d4d25887b402594639131b09e753e3c3253a597137
Dec  1 04:50:56 np0005540825 systemd[1]: var-lib-containers-storage-overlay-024aab9bd783b86f910cadb10eeb08c1d87d89255b5b2ac402fa1b123f6239c9-merged.mount: Deactivated successfully.
Dec  1 04:50:56 np0005540825 systemd[1]: Started Ceph mds.cephfs.compute-0.xijran for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:50:56 np0005540825 podman[95440]: 2025-12-01 09:50:56.121686094 +0000 UTC m=+1.091946928 container remove 4f854612ed591caba20135849093c513348a81e28b7d6d893013a708ae9029fb (image=quay.io/ceph/ceph:v19, name=vibrant_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:50:56 np0005540825 systemd[1]: libpod-conmon-4f854612ed591caba20135849093c513348a81e28b7d6d893013a708ae9029fb.scope: Deactivated successfully.
Dec  1 04:50:56 np0005540825 ceph-mds[95644]: set uid:gid to 167:167 (ceph:ceph)
Dec  1 04:50:56 np0005540825 ceph-mds[95644]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec  1 04:50:56 np0005540825 ceph-mds[95644]: main not setting numa affinity
Dec  1 04:50:56 np0005540825 ceph-mds[95644]: pidfile_write: ignore empty --pid-file
Dec  1 04:50:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mds-cephfs-compute-0-xijran[95633]: starting mds.cephfs.compute-0.xijran at 
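The new MDS runs as a cephadm-managed systemd unit; the "Starting Ceph mds.cephfs.compute-0.xijran for 365f19c2-..." message corresponds to cephadm's ceph-<fsid>@<daemon>.service naming. A sketch for inspecting it on this host, with the unit name assembled from that convention:

    # Status and logs of the containerized MDS
    systemctl status 'ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@mds.cephfs.compute-0.xijran.service'
    journalctl -u 'ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@mds.cephfs.compute-0.xijran.service'

    # cephadm's own inventory of daemons on this node (run as root)
    cephadm ls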
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:50:56 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran Updating MDS map to version 5 from mon.0
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ijlzoi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ijlzoi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ijlzoi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:50:56 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.ijlzoi on compute-1
Dec  1 04:50:56 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.ijlzoi on compute-1
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e6 new map
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e6 print_map
e6
btime 2025-12-01T09:50:56.603484+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-12-01T09:50:20.704523+0000
modified	2025-12-01T09:50:55.377737+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	0
up	{0=24223}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 24223 members: 24223
[mds.cephfs.compute-2.yoegjc{0:24223} state up:active seq 2 addr [v2:192.168.122.102:6804/3260542897,v1:192.168.122.102:6805/3260542897] compat {c=[1],r=[1],i=[1fff]}]

Standby daemons:

[mds.cephfs.compute-0.xijran{-1:14532} state up:standby seq 1 addr [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] compat {c=[1],r=[1],i=[1fff]}]
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  1 04:50:56 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran Updating MDS map to version 6 from mon.0
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  1 04:50:56 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran Monitors have assigned me to become a standby
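compute-0's MDS has registered and been parked as a standby; in map e7, printed shortly below, the monitor also raises standby_count_wanted from 0 to 1 now that a standby exists to count. The standby wiring can be verified with:

    # Dump the filesystem settings, including standby_count_wanted
    # and the current standby roster
    ceph fs get cephfs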
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.101:0/603108535' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.102:0/4186493149' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ijlzoi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ijlzoi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] up:boot
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoegjc=up:active} 1 up:standby
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.xijran"} v 0)
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.xijran"}]: dispatch
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e6 all = 0
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e7 new map
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e7 print_map
e7
btime 2025-12-01T09:50:56.889938+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-12-01T09:50:20.704523+0000
modified	2025-12-01T09:50:55.377737+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	0
up	{0=24223}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
qdb_cluster	leader: 24223 members: 24223
[mds.cephfs.compute-2.yoegjc{0:24223} state up:active seq 2 addr [v2:192.168.122.102:6804/3260542897,v1:192.168.122.102:6805/3260542897] compat {c=[1],r=[1],i=[1fff]}]

Standby daemons:

[mds.cephfs.compute-0.xijran{-1:14532} state up:standby seq 1 addr [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] compat {c=[1],r=[1],i=[1fff]}]
Dec  1 04:50:56 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoegjc=up:active} 1 up:standby
Dec  1 04:50:57 np0005540825 radosgw[94538]: v1 topic migration: starting v1 topic migration..
Dec  1 04:50:57 np0005540825 radosgw[94538]: LDAP not started since no server URIs were provided in the configuration.
Dec  1 04:50:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-rgw-rgw-compute-0-mxrshg[94532]: 2025-12-01T09:50:57.088+0000 7fe10dc6f980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec  1 04:50:57 np0005540825 radosgw[94538]: v1 topic migration: finished v1 topic migration
Dec  1 04:50:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Dec  1 04:50:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec  1 04:50:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec  1 04:50:57 np0005540825 radosgw[94538]: framework: beast
Dec  1 04:50:57 np0005540825 radosgw[94538]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec  1 04:50:57 np0005540825 radosgw[94538]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec  1 04:50:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Dec  1 04:50:57 np0005540825 radosgw[94538]: starting handler: beast
Dec  1 04:50:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Dec  1 04:50:57 np0005540825 radosgw[94538]: set uid:gid to 167:167 (ceph:ceph)
Dec  1 04:50:57 np0005540825 radosgw[94538]: mgrc service_daemon_register rgw.14508 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.mxrshg,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=a4b474d3-e1dd-44c2-9911-e36e5f368ef5,zone_name=default,zonegroup_id=079816e3-d8ce-476e-bcdd-2df39ad7439e,zonegroup_name=default}
Dec  1 04:50:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec  1 04:50:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Dec  1 04:50:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
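
The RGWReshardLock messages are benign lock contention: all three gateways scan the same numbered reshard queue shards (reshard.0000000000 and so on), and whichever instance takes a shard's lock first processes it while the others skip it. If the queue itself ever needs inspection, standard radosgw-admin commands apply:

    # List pending bucket-reshard entries across all queue shards
    radosgw-admin reshard list
    # Check one bucket's reshard state (bucket name is a placeholder)
    radosgw-admin reshard status --bucket=<bucket>
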
Dec  1 04:50:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 3.8 KiB/s wr, 15 op/s
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: Deploying daemon mds.cephfs.compute-1.ijlzoi on compute-1
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: from='client.? 192.168.122.100:0/4125115031' entity='client.rgw.rgw.compute-0.mxrshg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-1.alkudt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: from='client.? ' entity='client.rgw.rgw.compute-2.ugomkp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
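
The same `osd pool set` lands three times because each of the three gateways applies the identical tuning at startup. As a plain CLI call this is the following; pg_autoscale_bias > 1 tells the PG autoscaler to give the pool more placement groups than its raw capacity share would suggest, which suits small metadata-heavy pools like default.rgw.meta:

    # What the gateways just did, hand-rolled:
    ceph osd pool set default.rgw.meta pg_autoscale_bias 4
    # Confirm the bias and the autoscaler's resulting PG target:
    ceph osd pool autoscale-status
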
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
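
The mgr/cephadm/host.* and mgr/cephadm/spec.* writes above are the cephadm module checkpointing its host inventory and service specs into the monitors' config-key store, so a failed-over mgr can rehydrate its state. The blobs can be read back, with the caveat that their layout is a cephadm implementation detail:

    # Enumerate cephadm's persisted state (key names only):
    ceph config-key ls | grep mgr/cephadm
    # Fetch one blob, e.g. the cached device inventory for compute-1
    # (key name taken verbatim from the audit lines above):
    ceph config-key get mgr/cephadm/host.compute-1.devices.0
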
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 6440dccf-ef58-4192-9475-cd7da111d5d8 (Updating mds.cephfs deployment (+3 -> 3))
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 6440dccf-ef58-4192-9475-cd7da111d5d8 (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev b597e2af-8ba6-421f-aab8-779d09d44701 (Updating nfs.cephfs deployment (+3 -> 3))
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.osfnzc
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.osfnzc
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.osfnzc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.osfnzc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.osfnzc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
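
The mon_command above is just the JSON wire form of an ordinary `ceph auth get-or-create`: cephadm mints one restricted cephx identity per ganesha daemon, confined to the cephfs namespace of the .nfs pool. Hand-rolled it would read:

    ceph auth get-or-create client.nfs.cephfs.0.0.compute-1.osfnzc \
        mon 'allow r' \
        osd 'allow rw pool=.nfs namespace=cephfs'
    # Inspect the resulting key and caps:
    ceph auth get client.nfs.cephfs.0.0.compute-1.osfnzc
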
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
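
The create-key / use / auth-rm cycle around client.mgr.nfs.grace.nfs.cephfs is the mgr making a short-lived identity solely to update ganesha's grace database, a RADOS object in the .nfs pool. The same table can be inspected with the ganesha-rados-grace utility; a hedged sketch, assuming the tool is available (it ships in the ganesha container) and with the pool and namespace inferred from the caps in the surrounding lines:

    # Dump the grace DB (epoch and per-node flags):
    ganesha-rados-grace --pool .nfs --ns cephfs dump
    # The mgr effectively performed the equivalent of:
    ganesha-rados-grace --pool .nfs --ns cephfs add nfs.cephfs.0
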
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.osfnzc-rgw
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.osfnzc-rgw
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.osfnzc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.osfnzc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.osfnzc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.osfnzc's ganesha conf is defaulting to empty
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.osfnzc's ganesha conf is defaulting to empty
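
This warning recurs for every ganesha daemon deployed below and is expected here: with no bind address in the NFS spec, ganesha simply listens on all interfaces. For reference, the service being rolled out corresponds to a cephadm NFS spec along these lines (a minimal sketch reconstructed from the placement visible in this log; port 2049 is an assumption of the default):

    cat > /tmp/nfs-cephfs.yaml <<'EOF'
    service_type: nfs
    service_id: cephfs
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    spec:
      port: 2049
    EOF
    ceph orch apply -i /tmp/nfs-cephfs.yaml
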
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:50:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
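
config generate-minimal-conf is dispatched once per deployed daemon: the mgr asks the mons for the smallest ceph.conf a container needs to find the cluster, typically just fsid and mon_host. For this cluster the output would look roughly like the following (fsid taken from the container unit names in this log; the mon address list is illustrative):

    $ ceph config generate-minimal-conf
    # minimal ceph.conf for 365f19c2-81e5-5edd-b6b4-280555214d3a
    [global]
            fsid = 365f19c2-81e5-5edd-b6b4-280555214d3a
            mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]
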
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.osfnzc on compute-1
Dec  1 04:50:58 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.osfnzc on compute-1
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: Creating key for client.nfs.cephfs.0.0.compute-1.osfnzc
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.osfnzc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.osfnzc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: Rados config object exists: conf-nfs.cephfs
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: Creating key for client.nfs.cephfs.0.0.compute-1.osfnzc-rgw
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.osfnzc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.osfnzc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e8 new map
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e8 print_map
e8
btime 2025-12-01T09:50:59.122025+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name cephfs
epoch 5
flags 12 joinable allow_snaps allow_multimds_snaps
created 2025-12-01T09:50:20.704523+0000
modified 2025-12-01T09:50:55.377737+0000
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
max_xattr_size 65536
required_client_features {}
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds 1
in 0
up {0=24223}
failed
damaged
stopped
data_pools [7]
metadata_pool 6
inline_data disabled
balancer
bal_rank_mask -1
standby_count_wanted 1
qdb_cluster leader: 24223 members: 24223
[mds.cephfs.compute-2.yoegjc{0:24223} state up:active seq 2 addr [v2:192.168.122.102:6804/3260542897,v1:192.168.122.102:6805/3260542897] compat {c=[1],r=[1],i=[1fff]}]

Standby daemons:

[mds.cephfs.compute-0.xijran{-1:14532} state up:standby seq 1 addr [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] compat {c=[1],r=[1],i=[1fff]}]
[mds.cephfs.compute-1.ijlzoi{-1:24176} state up:standby seq 1 addr [v2:192.168.122.101:6804/1552678510,v1:192.168.122.101:6805/1552678510] compat {c=[1],r=[1],i=[1fff]}]
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1552678510,v1:192.168.122.101:6805/1552678510] up:boot
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoegjc=up:active} 2 up:standby
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.ijlzoi"} v 0)
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ijlzoi"}]: dispatch
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e8 all = 0
Dec  1 04:50:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 2.7 KiB/s wr, 10 op/s
Dec  1 04:50:59 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 16 completed events
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:50:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:50:59 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 17884586-1c31-4b15-9e01-041d1c660d29 (Global Recovery Event) in 10 seconds
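
The progress module lines track long-running orchestration as events (the "Updating mds.cephfs deployment" and "Global Recovery Event" entries above); completed events are periodically written back into the config-key store. The live view is available on the CLI:

    # Human-readable progress bars for in-flight events:
    ceph progress
    # Machine-readable dump, including recently completed events:
    ceph progress json
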
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: Bind address in nfs.cephfs.0.0.compute-1.osfnzc's ganesha conf is defaulting to empty
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: Deploying daemon nfs.cephfs.0.0.compute-1.osfnzc on compute-1
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:00 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.ymqwfj
Dec  1 04:51:00 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.ymqwfj
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ymqwfj", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ymqwfj", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ymqwfj", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  1 04:51:00 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  1 04:51:00 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:51:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e9 new map
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e9 print_map
e9
btime 2025-12-01T09:51:01.191346+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name cephfs
epoch 5
flags 12 joinable allow_snaps allow_multimds_snaps
created 2025-12-01T09:50:20.704523+0000
modified 2025-12-01T09:50:55.377737+0000
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
max_xattr_size 65536
required_client_features {}
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds 1
in 0
up {0=24223}
failed
damaged
stopped
data_pools [7]
metadata_pool 6
inline_data disabled
balancer
bal_rank_mask -1
standby_count_wanted 1
qdb_cluster leader: 24223 members: 24223
[mds.cephfs.compute-2.yoegjc{0:24223} state up:active seq 2 addr [v2:192.168.122.102:6804/3260542897,v1:192.168.122.102:6805/3260542897] compat {c=[1],r=[1],i=[1fff]}]

Standby daemons:

[mds.cephfs.compute-0.xijran{-1:14532} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] compat {c=[1],r=[1],i=[1fff]}]
[mds.cephfs.compute-1.ijlzoi{-1:24176} state up:standby seq 1 addr [v2:192.168.122.101:6804/1552678510,v1:192.168.122.101:6805/1552678510] compat {c=[1],r=[1],i=[1fff]}]
Dec  1 04:51:01 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran Updating MDS map to version 9 from mon.0
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] up:standby
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Dropping low affinity active daemon mds.cephfs.compute-2.yoegjc in favor of higher affinity standby.
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e9  replacing 24223 [v2:192.168.122.102:6804/3260542897,v1:192.168.122.102:6805/3260542897] mds.0.4 up:active with 14532/cephfs.compute-0.xijran [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086]
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Replacing daemon mds.cephfs.compute-2.yoegjc as rank 0 with standby daemon mds.cephfs.compute-0.xijran
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e9 fail_mds_gid 24223 mds.cephfs.compute-2.yoegjc role 0
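
This replacement is mds_join_fs affinity at work: the `config set name=mds_join_fs` dispatched during deployment pins the cephadm-managed daemons to the cephfs filesystem (visible as join_fscid=1 in the maps), and the monitor prefers a standby with matching affinity over an active daemon without it, hence "Dropping low affinity active daemon". The knob is set per daemon:

    # What cephadm set for each MDS it deployed (daemon name from this log):
    ceph config set mds.cephfs.compute-0.xijran mds_join_fs cephfs
    # Review the effective value:
    ceph config get mds.cephfs.compute-0.xijran mds_join_fs
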
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoegjc=up:active} 2 up:standby
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e10 new map
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e10 print_map
e10
btime 2025-12-01T09:51:01.219485+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name cephfs
epoch 10
flags 12 joinable allow_snaps allow_multimds_snaps
created 2025-12-01T09:50:20.704523+0000
modified 2025-12-01T09:51:01.219484+0000
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
max_xattr_size 65536
required_client_features {}
last_failure 0
last_failure_osd_epoch 57
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds 1
in 0
up {0=14532}
failed
damaged
stopped
data_pools [7]
metadata_pool 6
inline_data disabled
balancer
bal_rank_mask -1
standby_count_wanted 1
qdb_cluster leader: 0 members:
[mds.cephfs.compute-0.xijran{0:14532} state up:replay seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] compat {c=[1],r=[1],i=[1fff]}]

Standby daemons:

[mds.cephfs.compute-1.ijlzoi{-1:24176} state up:standby seq 1 addr [v2:192.168.122.101:6804/1552678510,v1:192.168.122.101:6805/1552678510] compat {c=[1],r=[1],i=[1fff]}]
Dec  1 04:51:01 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran Updating MDS map to version 10 from mon.0
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec  1 04:51:01 np0005540825 ceph-mds[95644]: mds.0.10 handle_mds_map I am now mds.0.10
Dec  1 04:51:01 np0005540825 ceph-mds[95644]: mds.0.10 handle_mds_map state change up:standby --> up:replay
Dec  1 04:51:01 np0005540825 ceph-mds[95644]: mds.0.10 replay_start
Dec  1 04:51:01 np0005540825 ceph-mds[95644]: mds.0.10  waiting for osdmap 57 (which blocklists prior instance)
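
The "waiting for osdmap 57" line is the standard fencing step: before replaying the journal, the replacement MDS waits until it has seen the osdmap epoch that blocklists its predecessor's addresses, so a half-dead mds.cephfs.compute-2.yoegjc can no longer write to the metadata pool. The fence is visible in the OSD map:

    # List current blocklist entries; the prior MDS instance's addrs
    # should appear here with an expiry time:
    ceph osd blocklist ls
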
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec  1 04:51:01 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.xijran=up:replay} 1 up:standby
Dec  1 04:51:01 np0005540825 ceph-mds[95644]: mds.0.cache creating system inode with ino:0x100
Dec  1 04:51:01 np0005540825 ceph-mds[95644]: mds.0.cache creating system inode with ino:0x1
Dec  1 04:51:01 np0005540825 ceph-mds[95644]: mds.0.10 Finished replaying journal
Dec  1 04:51:01 np0005540825 ceph-mds[95644]: mds.0.10 making mds journal writeable
Dec  1 04:51:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v38: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 212 KiB/s rd, 9.2 KiB/s wr, 399 op/s
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ymqwfj", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ymqwfj", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: Dropping low affinity active daemon mds.cephfs.compute-2.yoegjc in favor of higher affinity standby.
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: Replacing daemon mds.cephfs.compute-2.yoegjc as rank 0 with standby daemon mds.cephfs.compute-0.xijran
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e11 new map
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e11 print_map
e11
btime 2025-12-01T09:51:02.233755+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name cephfs
epoch 11
flags 12 joinable allow_snaps allow_multimds_snaps
created 2025-12-01T09:50:20.704523+0000
modified 2025-12-01T09:51:01.259526+0000
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
max_xattr_size 65536
required_client_features {}
last_failure 0
last_failure_osd_epoch 57
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds 1
in 0
up {0=14532}
failed
damaged
stopped
data_pools [7]
metadata_pool 6
inline_data disabled
balancer
bal_rank_mask -1
standby_count_wanted 1
qdb_cluster leader: 0 members:
[mds.cephfs.compute-0.xijran{0:14532} state up:reconnect seq 3 join_fscid=1 addr [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] compat {c=[1],r=[1],i=[1fff]}]

Standby daemons:

[mds.cephfs.compute-1.ijlzoi{-1:24176} state up:standby seq 1 addr [v2:192.168.122.101:6804/1552678510,v1:192.168.122.101:6805/1552678510] compat {c=[1],r=[1],i=[1fff]}]
[mds.cephfs.compute-2.yoegjc{-1:24241} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3537925606,v1:192.168.122.102:6805/3537925606] compat {c=[1],r=[1],i=[1fff]}]
Dec  1 04:51:02 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran Updating MDS map to version 11 from mon.0
Dec  1 04:51:02 np0005540825 ceph-mds[95644]: mds.0.10 handle_mds_map I am now mds.0.10
Dec  1 04:51:02 np0005540825 ceph-mds[95644]: mds.0.10 handle_mds_map state change up:replay --> up:reconnect
Dec  1 04:51:02 np0005540825 ceph-mds[95644]: mds.0.10 reconnect_start
Dec  1 04:51:02 np0005540825 ceph-mds[95644]: mds.0.10 reopen_log
Dec  1 04:51:02 np0005540825 ceph-mds[95644]: mds.0.10 reconnect_done
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] up:reconnect
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3537925606,v1:192.168.122.102:6805/3537925606] up:boot
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.xijran=up:reconnect} 2 up:standby
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.yoegjc"} v 0)
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.yoegjc"}]: dispatch
Dec  1 04:51:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e11 all = 0
Dec  1 04:51:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:51:03 np0005540825 ceph-mon[74416]: Creating key for client.nfs.cephfs.1.0.compute-2.ymqwfj
Dec  1 04:51:03 np0005540825 ceph-mon[74416]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  1 04:51:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e12 new map
Dec  1 04:51:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e12 print_map
e12
btime 2025-12-01T09:51:03.341186+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name cephfs
epoch 12
flags 12 joinable allow_snaps allow_multimds_snaps
created 2025-12-01T09:50:20.704523+0000
modified 2025-12-01T09:51:02.346773+0000
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
max_xattr_size 65536
required_client_features {}
last_failure 0
last_failure_osd_epoch 57
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds 1
in 0
up {0=14532}
failed
damaged
stopped
data_pools [7]
metadata_pool 6
inline_data disabled
balancer
bal_rank_mask -1
standby_count_wanted 1
qdb_cluster leader: 0 members:
[mds.cephfs.compute-0.xijran{0:14532} state up:rejoin seq 4 join_fscid=1 addr [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] compat {c=[1],r=[1],i=[1fff]}]

Standby daemons:

[mds.cephfs.compute-1.ijlzoi{-1:24176} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1552678510,v1:192.168.122.101:6805/1552678510] compat {c=[1],r=[1],i=[1fff]}]
[mds.cephfs.compute-2.yoegjc{-1:24241} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3537925606,v1:192.168.122.102:6805/3537925606] compat {c=[1],r=[1],i=[1fff]}]
Dec  1 04:51:03 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran Updating MDS map to version 12 from mon.0
Dec  1 04:51:03 np0005540825 ceph-mds[95644]: mds.0.10 handle_mds_map I am now mds.0.10
Dec  1 04:51:03 np0005540825 ceph-mds[95644]: mds.0.10 handle_mds_map state change up:reconnect --> up:rejoin
Dec  1 04:51:03 np0005540825 ceph-mds[95644]: mds.0.10 rejoin_start
Dec  1 04:51:03 np0005540825 ceph-mds[95644]: mds.0.10 rejoin_joint_start
Dec  1 04:51:03 np0005540825 ceph-mds[95644]: mds.0.10 rejoin_done
Dec  1 04:51:03 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] up:rejoin
Dec  1 04:51:03 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1552678510,v1:192.168.122.101:6805/1552678510] up:standby
Dec  1 04:51:03 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.xijran=up:rejoin} 2 up:standby
Dec  1 04:51:03 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.xijran is now active in filesystem cephfs as rank 0
Dec  1 04:51:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v39: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 160 KiB/s rd, 6.9 KiB/s wr, 301 op/s
Dec  1 04:51:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  1 04:51:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  1 04:51:04 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  1 04:51:04 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  1 04:51:04 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.ymqwfj-rgw
Dec  1 04:51:04 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.ymqwfj-rgw
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ymqwfj-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ymqwfj-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ymqwfj-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  1 04:51:04 np0005540825 ceph-mgr[74709]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.ymqwfj's ganesha conf is defaulting to empty
Dec  1 04:51:04 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.ymqwfj's ganesha conf is defaulting to empty
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:51:04 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.ymqwfj on compute-2
Dec  1 04:51:04 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.ymqwfj on compute-2
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e13 new map
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: daemon mds.cephfs.compute-0.xijran is now active in filesystem cephfs as rank 0
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ymqwfj-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:51:04 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran Updating MDS map to version 13 from mon.0
Dec  1 04:51:04 np0005540825 ceph-mds[95644]: mds.0.10 handle_mds_map I am now mds.0.10
Dec  1 04:51:04 np0005540825 ceph-mds[95644]: mds.0.10 handle_mds_map state change up:rejoin --> up:active
Dec  1 04:51:04 np0005540825 ceph-mds[95644]: mds.0.10 recovery_done -- successful recovery!
Dec  1 04:51:04 np0005540825 ceph-mds[95644]: mds.0.10 active_start
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e13 print_map
e13
btime 2025-12-01T09:51:04.350567+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name cephfs
epoch 13
flags 12 joinable allow_snaps allow_multimds_snaps
created 2025-12-01T09:50:20.704523+0000
modified 2025-12-01T09:51:04.350563+0000
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
max_xattr_size 65536
required_client_features {}
last_failure 0
last_failure_osd_epoch 57
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds 1
in 0
up {0=14532}
failed
damaged
stopped
data_pools [7]
metadata_pool 6
inline_data disabled
balancer
bal_rank_mask -1
standby_count_wanted 1
qdb_cluster leader: 14532 members: 14532
[mds.cephfs.compute-0.xijran{0:14532} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] compat {c=[1],r=[1],i=[1fff]}]

Standby daemons:

[mds.cephfs.compute-1.ijlzoi{-1:24176} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1552678510,v1:192.168.122.101:6805/1552678510] compat {c=[1],r=[1],i=[1fff]}]
[mds.cephfs.compute-2.yoegjc{-1:24241} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3537925606,v1:192.168.122.102:6805/3537925606] compat {c=[1],r=[1],i=[1fff]}]
Dec  1 04:51:04 np0005540825 ceph-mds[95644]: mds.0.10 cluster recovered.
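
Epochs e10 through e13 above walk the full MDS takeover state machine: up:replay (re-run the metadata journal), up:reconnect (wait for client sessions), up:rejoin (rebuild cache state), then up:active, with FS_DEGRADED raised at the start and cleared at the end. Steady state can be confirmed with:

    # Rank table, daemon states and standbys for the filesystem:
    ceph fs status cephfs
    # One-line fsmap summary, matching the mon's DBG lines above:
    ceph mds stat
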
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2856291086,v1:192.168.122.100:6807/2856291086] up:active
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.xijran=up:active} 2 up:standby
Dec  1 04:51:04 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 17 completed events
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:51:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 4.5 KiB/s wr, 268 op/s
Dec  1 04:51:05 np0005540825 ceph-mon[74416]: Rados config object exists: conf-nfs.cephfs
Dec  1 04:51:05 np0005540825 ceph-mon[74416]: Creating key for client.nfs.cephfs.1.0.compute-2.ymqwfj-rgw
Dec  1 04:51:05 np0005540825 ceph-mon[74416]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Dec  1 04:51:05 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ymqwfj-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  1 04:51:05 np0005540825 ceph-mon[74416]: Bind address in nfs.cephfs.1.0.compute-2.ymqwfj's ganesha conf is defaulting to empty
Dec  1 04:51:05 np0005540825 ceph-mon[74416]: Deploying daemon nfs.cephfs.1.0.compute-2.ymqwfj on compute-2
Dec  1 04:51:05 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:06 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.pytvsu
Dec  1 04:51:06 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.pytvsu
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.pytvsu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.pytvsu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.pytvsu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  1 04:51:06 np0005540825 ceph-mgr[74709]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  1 04:51:06 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.pytvsu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.pytvsu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  1 04:51:06 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  1 04:51:06 np0005540825 ceph-mds[95644]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec  1 04:51:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mds-cephfs-compute-0-xijran[95633]: 2025-12-01T09:51:06.903+0000 7fa0858a3640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec  1 04:51:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v41: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 4.1 KiB/s wr, 241 op/s
Dec  1 04:51:07 np0005540825 ceph-mon[74416]: Creating key for client.nfs.cephfs.2.0.compute-0.pytvsu
Dec  1 04:51:07 np0005540825 ceph-mon[74416]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  1 04:51:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:51:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v42: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 4.1 KiB/s wr, 241 op/s
Dec  1 04:51:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  1 04:51:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  1 04:51:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  1 04:51:09 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  1 04:51:09 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  1 04:51:09 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  1 04:51:09 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  1 04:51:09 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.pytvsu-rgw
Dec  1 04:51:09 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.pytvsu-rgw
Dec  1 04:51:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.pytvsu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  1 04:51:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.pytvsu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:51:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.pytvsu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  1 04:51:10 np0005540825 ceph-mgr[74709]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.pytvsu's ganesha conf is defaulting to empty
Dec  1 04:51:10 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.pytvsu's ganesha conf is defaulting to empty
Dec  1 04:51:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:51:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:51:10 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.pytvsu on compute-0
Dec  1 04:51:10 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.pytvsu on compute-0
Dec  1 04:51:10 np0005540825 podman[95916]: 2025-12-01 09:51:10.668573652 +0000 UTC m=+0.050013572 container create c6bbc4f000c6b0e0483a95d7fd9689dec1b1ce8d2c088398d424ee2c264ac307 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_payne, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 04:51:10 np0005540825 systemd[1]: Started libpod-conmon-c6bbc4f000c6b0e0483a95d7fd9689dec1b1ce8d2c088398d424ee2c264ac307.scope.
Dec  1 04:51:10 np0005540825 podman[95916]: 2025-12-01 09:51:10.646476219 +0000 UTC m=+0.027916169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:51:10 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:51:10 np0005540825 podman[95916]: 2025-12-01 09:51:10.762350549 +0000 UTC m=+0.143790499 container init c6bbc4f000c6b0e0483a95d7fd9689dec1b1ce8d2c088398d424ee2c264ac307 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:51:10 np0005540825 podman[95916]: 2025-12-01 09:51:10.770508384 +0000 UTC m=+0.151948314 container start c6bbc4f000c6b0e0483a95d7fd9689dec1b1ce8d2c088398d424ee2c264ac307 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:51:10 np0005540825 sweet_payne[95932]: 167 167
Dec  1 04:51:10 np0005540825 systemd[1]: libpod-c6bbc4f000c6b0e0483a95d7fd9689dec1b1ce8d2c088398d424ee2c264ac307.scope: Deactivated successfully.
Dec  1 04:51:10 np0005540825 podman[95916]: 2025-12-01 09:51:10.777182491 +0000 UTC m=+0.158622441 container attach c6bbc4f000c6b0e0483a95d7fd9689dec1b1ce8d2c088398d424ee2c264ac307 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 04:51:10 np0005540825 conmon[95932]: conmon c6bbc4f000c6b0e0483a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6bbc4f000c6b0e0483a95d7fd9689dec1b1ce8d2c088398d424ee2c264ac307.scope/container/memory.events
Dec  1 04:51:10 np0005540825 podman[95916]: 2025-12-01 09:51:10.778633239 +0000 UTC m=+0.160073179 container died c6bbc4f000c6b0e0483a95d7fd9689dec1b1ce8d2c088398d424ee2c264ac307 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_payne, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  1 04:51:10 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7b2fa39352d82874d8bf1fe0c12755919347ca030d730b71f922da3c16be9011-merged.mount: Deactivated successfully.
Dec  1 04:51:10 np0005540825 podman[95916]: 2025-12-01 09:51:10.826106993 +0000 UTC m=+0.207546923 container remove c6bbc4f000c6b0e0483a95d7fd9689dec1b1ce8d2c088398d424ee2c264ac307 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_payne, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 04:51:10 np0005540825 systemd[1]: libpod-conmon-c6bbc4f000c6b0e0483a95d7fd9689dec1b1ce8d2c088398d424ee2c264ac307.scope: Deactivated successfully.
Dec  1 04:51:10 np0005540825 systemd[1]: Reloading.
Dec  1 04:51:10 np0005540825 ceph-mon[74416]: Rados config object exists: conf-nfs.cephfs
Dec  1 04:51:10 np0005540825 ceph-mon[74416]: Creating key for client.nfs.cephfs.2.0.compute-0.pytvsu-rgw
Dec  1 04:51:10 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.pytvsu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  1 04:51:10 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.pytvsu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  1 04:51:10 np0005540825 ceph-mon[74416]: Bind address in nfs.cephfs.2.0.compute-0.pytvsu's ganesha conf is defaulting to empty
Dec  1 04:51:10 np0005540825 ceph-mon[74416]: Deploying daemon nfs.cephfs.2.0.compute-0.pytvsu on compute-0
Dec  1 04:51:10 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:51:10 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:51:11 np0005540825 systemd[1]: Reloading.
Dec  1 04:51:11 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:51:11 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:51:11 np0005540825 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:51:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v43: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Dec  1 04:51:11 np0005540825 podman[96075]: 2025-12-01 09:51:11.737468482 +0000 UTC m=+0.042424162 container create 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 04:51:11 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5630288d60feffebc6f30f0a5d2221ded4dfcbd43b00316a73953fb6ddb69b29/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  1 04:51:11 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5630288d60feffebc6f30f0a5d2221ded4dfcbd43b00316a73953fb6ddb69b29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:51:11 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5630288d60feffebc6f30f0a5d2221ded4dfcbd43b00316a73953fb6ddb69b29/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:51:11 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5630288d60feffebc6f30f0a5d2221ded4dfcbd43b00316a73953fb6ddb69b29/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.pytvsu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:51:11 np0005540825 podman[96075]: 2025-12-01 09:51:11.808767415 +0000 UTC m=+0.113723115 container init 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  1 04:51:11 np0005540825 podman[96075]: 2025-12-01 09:51:11.719183109 +0000 UTC m=+0.024138819 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:51:11 np0005540825 podman[96075]: 2025-12-01 09:51:11.815560574 +0000 UTC m=+0.120516254 container start 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  1 04:51:11 np0005540825 bash[96075]: 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf
Dec  1 04:51:11 np0005540825 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:51:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  1 04:51:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  1 04:51:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  1 04:51:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  1 04:51:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  1 04:51:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  1 04:51:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  1 04:51:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:51:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:51:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:51:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:51:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:12 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev b597e2af-8ba6-421f-aab8-779d09d44701 (Updating nfs.cephfs deployment (+3 -> 3))
Dec  1 04:51:12 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event b597e2af-8ba6-421f-aab8-779d09d44701 (Updating nfs.cephfs deployment (+3 -> 3)) in 14 seconds
Dec  1 04:51:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:51:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:12 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev e3f3bcef-0ef6-4f1a-8ef4-781e47f84427 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec  1 04:51:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Dec  1 04:51:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:12 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.pwynis on compute-1
Dec  1 04:51:12 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.pwynis on compute-1
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  1 04:51:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 04:51:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:51:13 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:13 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:13 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:13 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:13 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:13 np0005540825 ceph-mon[74416]: Deploying daemon haproxy.nfs.cephfs.compute-1.pwynis on compute-1
Dec  1 04:51:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v44: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1023 B/s wr, 9 op/s
Dec  1 04:51:14 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 18 completed events
Dec  1 04:51:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:51:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v45: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1023 B/s wr, 9 op/s
Dec  1 04:51:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:16 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:51:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:51:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  1 04:51:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:17 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.alcixd on compute-0
Dec  1 04:51:17 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.alcixd on compute-0
Dec  1 04:51:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v46: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.0 KiB/s wr, 13 op/s
Dec  1 04:51:18 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:18 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:18 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:18 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad38000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:51:19 np0005540825 ceph-mon[74416]: Deploying daemon haproxy.nfs.cephfs.compute-0.alcixd on compute-0
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:51:19
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.meta', '.nfs', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', 'volumes', '.rgw.root', 'vms', 'images']
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v47: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 1.8 KiB/s wr, 6 op/s
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec  1 04:51:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:51:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:51:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:51:20 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:51:20 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:51:20 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:51:20 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:51:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:20 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad200016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec  1 04:51:20 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:51:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:51:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec  1 04:51:20 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec  1 04:51:20 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev cc8bcdef-c1e9-4631-afcc-1bf32fd1ab73 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  1 04:51:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:51:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:51:21 np0005540825 podman[96237]: 2025-12-01 09:51:21.49926952 +0000 UTC m=+3.660731591 container create 53f4f2d5badb3045b2b9098e6cd2fd290cf9050f5704bcd1ffb6c071c13148ab (image=quay.io/ceph/haproxy:2.3, name=compassionate_hawking)
Dec  1 04:51:21 np0005540825 systemd[1]: Started libpod-conmon-53f4f2d5badb3045b2b9098e6cd2fd290cf9050f5704bcd1ffb6c071c13148ab.scope.
Dec  1 04:51:21 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:51:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v49: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.2 KiB/s wr, 5 op/s
Dec  1 04:51:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:51:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:21 np0005540825 podman[96237]: 2025-12-01 09:51:21.48262804 +0000 UTC m=+3.644090121 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  1 04:51:21 np0005540825 podman[96237]: 2025-12-01 09:51:21.58522742 +0000 UTC m=+3.746689531 container init 53f4f2d5badb3045b2b9098e6cd2fd290cf9050f5704bcd1ffb6c071c13148ab (image=quay.io/ceph/haproxy:2.3, name=compassionate_hawking)
Dec  1 04:51:21 np0005540825 podman[96237]: 2025-12-01 09:51:21.590838128 +0000 UTC m=+3.752300189 container start 53f4f2d5badb3045b2b9098e6cd2fd290cf9050f5704bcd1ffb6c071c13148ab (image=quay.io/ceph/haproxy:2.3, name=compassionate_hawking)
Dec  1 04:51:21 np0005540825 podman[96237]: 2025-12-01 09:51:21.594621368 +0000 UTC m=+3.756083429 container attach 53f4f2d5badb3045b2b9098e6cd2fd290cf9050f5704bcd1ffb6c071c13148ab (image=quay.io/ceph/haproxy:2.3, name=compassionate_hawking)
Dec  1 04:51:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec  1 04:51:21 np0005540825 systemd[1]: libpod-53f4f2d5badb3045b2b9098e6cd2fd290cf9050f5704bcd1ffb6c071c13148ab.scope: Deactivated successfully.
Dec  1 04:51:21 np0005540825 compassionate_hawking[96355]: 0 0
Dec  1 04:51:21 np0005540825 conmon[96355]: conmon 53f4f2d5badb3045b2b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53f4f2d5badb3045b2b9098e6cd2fd290cf9050f5704bcd1ffb6c071c13148ab.scope/container/memory.events
Dec  1 04:51:21 np0005540825 podman[96237]: 2025-12-01 09:51:21.597163845 +0000 UTC m=+3.758625906 container died 53f4f2d5badb3045b2b9098e6cd2fd290cf9050f5704bcd1ffb6c071c13148ab (image=quay.io/ceph/haproxy:2.3, name=compassionate_hawking)
Dec  1 04:51:21 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4d03893461237f529557a04f93c214b2c518dfef7c76288b0e4651c644cc6a6a-merged.mount: Deactivated successfully.
Dec  1 04:51:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:51:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:51:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:22 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:51:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec  1 04:51:23 np0005540825 podman[96237]: 2025-12-01 09:51:23.315130686 +0000 UTC m=+5.476592787 container remove 53f4f2d5badb3045b2b9098e6cd2fd290cf9050f5704bcd1ffb6c071c13148ab (image=quay.io/ceph/haproxy:2.3, name=compassionate_hawking)
Dec  1 04:51:23 np0005540825 systemd[1]: libpod-conmon-53f4f2d5badb3045b2b9098e6cd2fd290cf9050f5704bcd1ffb6c071c13148ab.scope: Deactivated successfully.
Dec  1 04:51:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v50: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.2 KiB/s wr, 5 op/s
Dec  1 04:51:23 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec  1 04:51:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:51:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:23 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 3d84249c-d1e4-4528-97ca-0fefe7b153be (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  1 04:51:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:51:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:51:23 np0005540825 systemd[1]: Reloading.
Dec  1 04:51:23 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:51:23 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:51:24 np0005540825 systemd[1]: Reloading.
Dec  1 04:51:24 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:51:24 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:51:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:24 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec  1 04:51:24 np0005540825 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.alcixd for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:51:24 np0005540825 podman[96500]: 2025-12-01 09:51:24.592571363 +0000 UTC m=+0.104985304 container create 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 04:51:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:51:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec  1 04:51:24 np0005540825 podman[96500]: 2025-12-01 09:51:24.506689285 +0000 UTC m=+0.019103246 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  1 04:51:24 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec  1 04:51:24 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev b38a15e7-d291-4324-942f-e80d3c790feb (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  1 04:51:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:51:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:51:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27bbd0dab2a0d4afc201be2a3bd37efe034c9b94a30478d82633dd1a05ee46de/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec  1 04:51:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v53: 229 pgs: 31 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:51:25 np0005540825 ceph-mgr[74709]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec  1 04:51:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:26 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:26 np0005540825 podman[96500]: 2025-12-01 09:51:26.249788129 +0000 UTC m=+1.762202080 container init 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:51:26 np0005540825 podman[96500]: 2025-12-01 09:51:26.256566848 +0000 UTC m=+1.768980789 container start 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 04:51:26 np0005540825 bash[96500]: 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569
Dec  1 04:51:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [NOTICE] 334/095126 (2) : New worker #1 (4) forked
Dec  1 04:51:26 np0005540825 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.alcixd for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec  1 04:51:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095126 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 61 pg[10.0( v 56'1015 (0'0,56'1015] local-lis/les=50/51 n=178 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=61 pruub=12.152910233s) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 56'1014 mlcod 56'1014 active pruub 190.881378174s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:26 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 65342f04-d78b-4fb3-b932-3e8b9779fcb1 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 61 pg[10.0( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=61 pruub=12.152910233s) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 56'1014 mlcod 0'0 unknown pruub 190.881378174s@ mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea046525c8 space 0x55ea04427390 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea046528e8 space 0x55ea04584420 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04653568 space 0x55ea045ee420 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04662488 space 0x55ea045ee690 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04663a68 space 0x55ea044317a0 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea046522a8 space 0x55ea04584900 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea0461a168 space 0x55ea043e6420 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04187f68 space 0x55ea045ee900 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea043b2168 space 0x55ea045ee1b0 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04662f28 space 0x55ea0456c4f0 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea046539c8 space 0x55ea04426760 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04632168 space 0x55ea043d3a10 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04663428 space 0x55ea04517870 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04671248 space 0x55ea04584d10 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea046628e8 space 0x55ea045ee830 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04663ec8 space 0x55ea045ee010 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea0461ade8 space 0x55ea0447da10 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea043f1248 space 0x55ea045ee280 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04671108 space 0x55ea043b00e0 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea046623e8 space 0x55ea045ee4f0 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04670de8 space 0x55ea043ea0e0 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea046532e8 space 0x55ea045ee760 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea046705c8 space 0x55ea04585050 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04691928 space 0x55ea045849d0 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04652f28 space 0x55ea045ee5c0 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea0441c3e8 space 0x55ea045ee350 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04653ba8 space 0x55ea04584280 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04632988 space 0x55ea045856d0 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea046625c8 space 0x55ea045ee0e0 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55ea059bd8c0) operator()   moving buffer(0x55ea04632348 space 0x55ea046ca280 0x0~1000 clean)
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v55: 291 pgs: 62 unknown, 32 peering, 197 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:27 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 1349958d-6f1b-4229-8662-d1f07f672198 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev cc8bcdef-c1e9-4631-afcc-1bf32fd1ab73 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event cc8bcdef-c1e9-4631-afcc-1bf32fd1ab73 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 7 seconds
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 3d84249c-d1e4-4528-97ca-0fefe7b153be (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 3d84249c-d1e4-4528-97ca-0fefe7b153be (PG autoscaler increasing pool 9 PGs from 1 to 32) in 4 seconds
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev b38a15e7-d291-4324-942f-e80d3c790feb (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event b38a15e7-d291-4324-942f-e80d3c790feb (PG autoscaler increasing pool 10 PGs from 1 to 32) in 3 seconds
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 65342f04-d78b-4fb3-b932-3e8b9779fcb1 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 65342f04-d78b-4fb3-b932-3e8b9779fcb1 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 2 seconds
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 1349958d-6f1b-4229-8662-d1f07f672198 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 1349958d-6f1b-4229-8662-d1f07f672198 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.7( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.12( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.11( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.10( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1f( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1b( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1e( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1d( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1c( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1a( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.19( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.6( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.18( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.5( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.4( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.3( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.b( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.8( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.d( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.9( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.c( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.a( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.e( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.f( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.13( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.14( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.16( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.15( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.2( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.17( v 56'1015 lc 0'0 (0'0,56'1015] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.bdogrt on compute-2
Dec  1 04:51:27 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.bdogrt on compute-2
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.12( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:27 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.7( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.10( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1d( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1c( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1a( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.6( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.18( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.4( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.5( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.3( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.8( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.d( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.b( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.11( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.9( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.a( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.c( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.e( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.0( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 56'1014 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.1( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.13( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.17( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.15( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.16( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.14( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 62 pg[10.2( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [1] r=0 lpr=61 pi=[50,61)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:51:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:28 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad100016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec  1 04:51:28 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec  1 04:51:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec  1 04:51:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec  1 04:51:28 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:28 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:28 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  1 04:51:28 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:28 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec  1 04:51:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v58: 322 pgs: 93 unknown, 32 peering, 197 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:51:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  1 04:51:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:29 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:29 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec  1 04:51:29 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec  1 04:51:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec  1 04:51:29 np0005540825 ceph-mon[74416]: Deploying daemon haproxy.nfs.cephfs.compute-2.bdogrt on compute-2
Dec  1 04:51:29 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:29 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:30 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec  1 04:51:30 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec  1 04:51:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 64 pg[12.0( empty local-lis/les=54/55 n=0 ec=54/54 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=13.264980316s) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active pruub 196.053680420s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 64 pg[12.0( empty local-lis/les=54/55 n=0 ec=54/54 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=13.264980316s) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown pruub 196.053680420s@ mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:30 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Dec  1 04:51:30 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Dec  1 04:51:30 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 23 completed events
Dec  1 04:51:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:51:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec  1 04:51:31 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  1 04:51:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 1 peering, 31 unknown, 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:51:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:31 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:31 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec  1 04:51:31 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec  1 04:51:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:32 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad100016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec  1 04:51:32 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec  1 04:51:32 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec  1 04:51:32 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec  1 04:51:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:51:33 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.11( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.13( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.10( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.12( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.15( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.4( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.7( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.6( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.9( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.8( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.f( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.a( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.b( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.e( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.d( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.5( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.c( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.2( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.3( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1e( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1f( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1c( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1a( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1b( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.18( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.19( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.16( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.17( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.14( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1d( empty local-lis/les=54/55 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 1 peering, 31 unknown, 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:51:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:33 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec  1 04:51:33 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec  1 04:51:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.13( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.4( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.11( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.12( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.10( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.6( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.7( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.9( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.8( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.a( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.b( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.f( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.15( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.d( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.2( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.0( empty local-lis/les=64/65 n=0 ec=54/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.5( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1b( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1e( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.3( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.e( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1a( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.c( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.18( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.19( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.16( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.14( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.17( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1d( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1c( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 65 pg[12.1f( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=54/54 les/c/f=55/55/0 sis=64) [1] r=0 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:34 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:51:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec  1 04:51:34 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec  1 04:51:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:34 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Dec  1 04:51:35 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:35 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  1 04:51:35 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  1 04:51:35 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:51:35 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:51:35 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:51:35 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:51:35 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.wzwqmm on compute-1
Dec  1 04:51:35 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.wzwqmm on compute-1
Dec  1 04:51:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 1 peering, 31 unknown, 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:51:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:35 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad100016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:35 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec  1 04:51:35 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec  1 04:51:35 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:35 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:35 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:35 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:36 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:36 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec  1 04:51:36 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec  1 04:51:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:36 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340032d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: Deploying daemon keepalived.nfs.cephfs.compute-1.wzwqmm on compute-1
Dec  1 04:51:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:51:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:37 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad100016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:37 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec  1 04:51:37 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec  1 04:51:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:38 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:51:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:38 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec  1 04:51:38 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.1a( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[8.19( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[9.11( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[8.10( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[9.12( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[8.12( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.1e( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.1c( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.1d( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[8.18( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.1b( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[8.1b( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[8.4( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.7( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.4( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.5( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[9.a( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[8.8( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.f( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[9.d( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.1( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.12( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[9.10( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[11.14( empty local-lis/les=0/0 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[8.17( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[8.14( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[9.15( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.17( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.493083954s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.498092651s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.17( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.493050575s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.498092651s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.10( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.463691711s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.468872070s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.10( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.463665009s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.468872070s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.15( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.492637634s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.498184204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.11( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.463072777s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.468719482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.15( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.492614746s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.498184204s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.11( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.463030815s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.468719482s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.13( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.492224693s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.498077393s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.12( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462936401s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.468826294s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.13( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.492196083s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.498077393s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.12( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462903976s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.468826294s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.4( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462757111s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.468704224s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.4( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462741852s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.468704224s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.13( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462543488s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.468658447s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.1( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.491848946s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.498031616s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.6( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462686539s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.468902588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.7( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462679863s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.468902588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.6( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462671280s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.468902588s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.13( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462449074s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.468658447s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.1( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.491822243s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.498031616s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.7( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462656021s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.468902588s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.9( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462487221s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.469024658s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.491464615s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.498077393s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.491434097s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.498077393s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.9( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462465286s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.469024658s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.8( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462379456s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.469085693s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.8( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462352753s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.469085693s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.9( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.490979195s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.497909546s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.c( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.463235855s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.470184326s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.c( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.463211060s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.470184326s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.9( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.490951538s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.497909546s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.b( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462266922s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.469284058s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.d( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.490038872s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.497039795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.b( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462228775s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.469284058s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.d( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.489976883s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.497039795s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.e( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462908745s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.470123291s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.e( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462893486s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.470123291s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.b( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.490462303s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.497833252s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.b( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.490442276s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.497833252s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.3( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.489579201s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.497024536s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.3( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.489557266s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.497024536s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.3( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462264061s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.469909668s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.5( v 62'1018 (0'0,62'1018] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.489182472s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 62'1017 mlcod 62'1017 active pruub 204.497039795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.3( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.462041855s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.469909668s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.5( v 62'1018 (0'0,62'1018] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.489132881s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 62'1017 mlcod 0'0 unknown NOTIFY pruub 204.497039795s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.2( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461738586s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.469726562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.2( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461716652s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.469726562s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.1e( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461808205s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.469894409s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.1e( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461787224s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.469894409s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.a( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.460961342s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.469192505s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.a( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.460925102s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.469192505s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.488409042s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.496749878s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.488389015s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.496749878s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.1c( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461818695s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.470306396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.1c( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461791992s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.470306396s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.1d( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.487944603s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.496673584s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.1a( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461450577s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.470123291s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.1d( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.487915993s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.496673584s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.1a( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461361885s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.470123291s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.18( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461369514s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.470184326s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.18( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461347580s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.470184326s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.487595558s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.496551514s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.487572670s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.496551514s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.19( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461088181s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.470199585s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.19( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461067200s) [0] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.470199585s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.17( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.461006165s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.470260620s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.17( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.460984230s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.470260620s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.7( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.416191101s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.425491333s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.11( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.488504410s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.497848511s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.7( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.416140556s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.425491333s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.11( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.488474846s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.497848511s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.488592148s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.498016357s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=13.488577843s) [2] r=-1 lpr=66 pi=[61,66)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.498016357s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.1d( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.460847855s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 active pruub 202.470291138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 66 pg[12.1d( empty local-lis/les=64/65 n=0 ec=64/54 lis/c=64/64 les/c/f=65/65/0 sis=66 pruub=11.460808754s) [2] r=-1 lpr=66 pi=[64,66)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 202.470291138s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec  1 04:51:38 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec  1 04:51:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:38 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec  1 04:51:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec  1 04:51:39 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.7( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.7( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.11( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.1d( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.1d( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.11( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.5( v 62'1018 (0'0,62'1018] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 62'1017 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.5( v 62'1018 (0'0,62'1018] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 62'1017 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.3( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.b( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.3( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.b( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.d( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.d( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.9( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.9( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.1( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.1( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.13( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.13( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.15( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.17( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.15( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[10.17( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[9.15( v 49'6 (0'0,49'6] local-lis/les=66/67 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[8.14( v 57'44 (0'0,57'44] local-lis/les=66/67 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=57'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.14( v 64'51 lc 53'45 (0'0,64'51] local-lis/les=66/67 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=64'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[9.10( v 49'6 (0'0,49'6] local-lis/les=66/67 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.12( v 53'48 (0'0,53'48] local-lis/les=66/67 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[8.17( v 57'44 (0'0,57'44] local-lis/les=66/67 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=57'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.1( v 53'48 (0'0,53'48] local-lis/les=66/67 n=1 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[9.d( v 49'6 (0'0,49'6] local-lis/les=66/67 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.f( v 53'48 (0'0,53'48] local-lis/les=66/67 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[8.8( v 57'44 (0'0,57'44] local-lis/les=66/67 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=57'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[9.f( v 49'6 lc 0'0 (0'0,49'6] local-lis/les=66/67 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=49'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.5( v 53'48 (0'0,53'48] local-lis/les=66/67 n=1 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.4( v 53'48 (0'0,53'48] local-lis/les=66/67 n=1 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[9.a( v 49'6 (0'0,49'6] local-lis/les=66/67 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[9.6( v 49'6 lc 0'0 (0'0,49'6] local-lis/les=66/67 n=1 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=49'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[8.4( v 57'44 (0'0,57'44] local-lis/les=66/67 n=1 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=57'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.7( v 53'48 (0'0,53'48] local-lis/les=66/67 n=1 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[8.1b( v 57'44 lc 56'8 (0'0,57'44] local-lis/les=66/67 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=57'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.1b( v 53'48 (0'0,53'48] local-lis/les=66/67 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[9.e( v 49'6 (0'0,49'6] local-lis/les=66/67 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.1c( v 53'48 (0'0,53'48] local-lis/les=66/67 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[8.12( v 57'44 (0'0,57'44] local-lis/les=66/67 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=57'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[8.18( v 57'44 lc 57'19 (0'0,57'44] local-lis/les=66/67 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=57'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[9.12( v 49'6 (0'0,49'6] local-lis/les=66/67 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.1d( v 53'48 (0'0,53'48] local-lis/les=66/67 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.1e( v 53'48 (0'0,53'48] local-lis/les=66/67 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[8.10( v 65'47 lc 61'46 (0'0,65'47] local-lis/les=66/67 n=1 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=65'47 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[11.1a( v 53'48 (0'0,53'48] local-lis/les=66/67 n=0 ec=63/52 lis/c=63/63 les/c/f=64/64/0 sis=66) [1] r=0 lpr=66 pi=[63,66)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[8.19( v 57'44 (0'0,57'44] local-lis/les=66/67 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[59,66)/1 crt=57'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 67 pg[9.11( v 49'6 (0'0,49'6] local-lis/les=66/67 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:51:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec  1 04:51:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  1 04:51:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:39 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340032d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:39 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:51:39 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:51:39 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:51:39 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  1 04:51:39 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:51:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:40 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10002f00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec  1 04:51:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  1 04:51:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec  1 04:51:40 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.16( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.193958282s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.498168945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.16( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.193901062s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.498168945s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.2( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.193225861s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.498184204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.2( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.193206787s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.498184204s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.e( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.192960739s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.498016357s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.e( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.192943573s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.498016357s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.a( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.192574501s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.497940063s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.a( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.192537308s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.497940063s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.1a( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.190204620s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.496887207s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.6( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.190212250s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.496932983s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.1a( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.190129280s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.496887207s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.6( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.190138817s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.496932983s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.189393997s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.496612549s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.189323425s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.496612549s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.12( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.117877960s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 204.425506592s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.12( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=11.117802620s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.425506592s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.11( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.3( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.13( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.1( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.17( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.9( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.7( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.5( v 62'1018 (0'0,62'1018] local-lis/les=67/68 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=62'1018 lcod 62'1017 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.b( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.d( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.1d( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 68 pg[10.15( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[61,67)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:40 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:40 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  1 04:51:41 np0005540825 python3[96557]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:51:41 np0005540825 podman[96558]: 2025-12-01 09:51:41.181253364 +0000 UTC m=+0.053986966 container create 2cca59f4b6f8d764ba0f16ab5037891993074a1c078910e49c628a8802bdb983 (image=quay.io/ceph/ceph:v19, name=intelligent_villani, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  1 04:51:41 np0005540825 systemd[1]: Started libpod-conmon-2cca59f4b6f8d764ba0f16ab5037891993074a1c078910e49c628a8802bdb983.scope.
Dec  1 04:51:41 np0005540825 podman[96558]: 2025-12-01 09:51:41.159406435 +0000 UTC m=+0.032140067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:51:41 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:51:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65514968e22fb3b1aea8782807acaf946b33f549cc992de22fd2d193e4537b3a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:51:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65514968e22fb3b1aea8782807acaf946b33f549cc992de22fd2d193e4537b3a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:51:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:41 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:51:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:41 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:51:41 np0005540825 podman[96558]: 2025-12-01 09:51:41.324264826 +0000 UTC m=+0.196998448 container init 2cca59f4b6f8d764ba0f16ab5037891993074a1c078910e49c628a8802bdb983 (image=quay.io/ceph/ceph:v19, name=intelligent_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:51:41 np0005540825 podman[96558]: 2025-12-01 09:51:41.335447138 +0000 UTC m=+0.208180740 container start 2cca59f4b6f8d764ba0f16ab5037891993074a1c078910e49c628a8802bdb983 (image=quay.io/ceph/ceph:v19, name=intelligent_villani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:51:41 np0005540825 podman[96558]: 2025-12-01 09:51:41.430998402 +0000 UTC m=+0.303732014 container attach 2cca59f4b6f8d764ba0f16ab5037891993074a1c078910e49c628a8802bdb983 (image=quay.io/ceph/ceph:v19, name=intelligent_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 04:51:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 1 active+clean+scrubbing, 16 remapped+peering, 336 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Dec  1 04:51:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:41 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec  1 04:51:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec  1 04:51:41 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.17( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.038389206s) [2] async=[2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.350952148s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.2( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.2( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.17( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.038001060s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.350952148s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.16( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.13( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.037705421s) [2] async=[2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.350799561s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.16( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.13( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.037603378s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.350799561s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.1( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.037549973s) [2] async=[2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.350906372s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.1( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.037490845s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.350906372s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.037343979s) [2] async=[2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.350997925s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.037287712s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.350997925s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.e( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.e( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.a( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.a( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.9( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.036729813s) [2] async=[2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.350997925s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.9( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.036659241s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.350997925s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.3( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.036144257s) [2] async=[2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.350723267s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.5( v 68'1022 (0'0,68'1022] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.036334038s) [2] async=[2] r=-1 lpr=69 pi=[61,69)/1 crt=62'1018 lcod 68'1021 mlcod 68'1021 active pruub 209.351104736s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.5( v 68'1022 (0'0,68'1022] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.036255836s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=62'1018 lcod 68'1021 mlcod 0'0 unknown NOTIFY pruub 209.351104736s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.3( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.035869598s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.350723267s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=7 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.036031723s) [2] async=[2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.351043701s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.1a( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.1a( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=7 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.035966873s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.351043701s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.11( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.027070045s) [2] async=[2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.342590332s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.12( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.12( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.11( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.027020454s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.342590332s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.6( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.6( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.035219193s) [2] async=[2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.350952148s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 69 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.035186768s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.350952148s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:42 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:51:42 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  1 04:51:42 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:51:42 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  1 04:51:42 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:42 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:51:42 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:51:42 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  1 04:51:42 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  1 04:51:42 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:51:42 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:51:42 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.gzwexr on compute-0
Dec  1 04:51:42 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.gzwexr on compute-0
Dec  1 04:51:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec  1 04:51:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec  1 04:51:42 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.15( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.996457100s) [2] async=[2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.351211548s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.15( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.996358871s) [2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.351211548s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.b( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.994365692s) [2] async=[2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.350799561s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.b( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.994316101s) [2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.350799561s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.d( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.994108200s) [2] async=[2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.350845337s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.d( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=6 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.993833542s) [2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.350845337s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.1d( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.993663788s) [2] async=[2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.351165771s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.1d( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.993616104s) [2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.351165771s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.992972374s) [2] async=[2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.351104736s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.7( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.992900848s) [2] async=[2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 209.351028442s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.992888451s) [2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.351104736s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:42 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.7( v 56'1015 (0'0,56'1015] local-lis/les=67/68 n=5 ec=61/50 lis/c=67/61 les/c/f=68/62/0 sis=70 pruub=13.992820740s) [2] r=-1 lpr=70 pi=[61,70)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.351028442s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:42 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:51:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] async=[0] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.e( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] async=[0] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.1a( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] async=[0] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.12( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] async=[0] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.16( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] async=[0] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.6( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] async=[0] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.2( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] async=[0] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 70 pg[10.a( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=69) [0]/[1] async=[0] r=0 lpr=69 pi=[61,69)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:43 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:43 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:43 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:43 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:51:43 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  1 04:51:43 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:51:43 np0005540825 ceph-mon[74416]: Deploying daemon keepalived.nfs.cephfs.compute-0.gzwexr on compute-0
Dec  1 04:51:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 10 peering, 1 active+clean+scrubbing, 6 remapped+peering, 336 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 566 B/s, 17 objects/s recovering
Dec  1 04:51:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:43 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec  1 04:51:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:44 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14002f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:44 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 04:51:44 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.15 deep-scrub starts
Dec  1 04:51:44 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.15 deep-scrub ok
Dec  1 04:51:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:44 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec  1 04:51:45 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec  1 04:51:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 10 peering, 1 active+clean+scrubbing, 6 remapped+peering, 336 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 216 B/s rd, 216 B/s wr, 0 op/s; 475 B/s, 13 objects/s recovering
Dec  1 04:51:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:45 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.f scrub starts
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.12( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=4 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.582130432s) [0] async=[0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 211.897247314s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.12( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=4 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.582087517s) [0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.897247314s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=5 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.578992844s) [0] async=[0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 211.894241333s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=5 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.578956604s) [0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.894241333s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.1a( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=4 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.579200745s) [0] async=[0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 211.894561768s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.1a( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=4 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.579176903s) [0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.894561768s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.6( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.581947327s) [0] async=[0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 211.897415161s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.6( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.581886292s) [0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.897415161s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.e( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.578944206s) [0] async=[0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 211.894561768s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.e( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.578907967s) [0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.894561768s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.2( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.581814766s) [0] async=[0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 211.897506714s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.2( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.581781387s) [0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.897506714s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.16( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=4 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.581510544s) [0] async=[0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 211.897293091s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.16( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=4 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.581464767s) [0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.897293091s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.a( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.581590652s) [0] async=[0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 211.897598267s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 71 pg[10.a( v 56'1015 (0'0,56'1015] local-lis/les=69/70 n=6 ec=61/50 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=13.581554413s) [0] r=-1 lpr=71 pi=[61,71)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.897598267s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:45 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.f scrub ok
Dec  1 04:51:46 np0005540825 intelligent_villani[96573]: could not fetch user info: no user info saved
Dec  1 04:51:46 np0005540825 systemd[1]: libpod-2cca59f4b6f8d764ba0f16ab5037891993074a1c078910e49c628a8802bdb983.scope: Deactivated successfully.
Dec  1 04:51:46 np0005540825 podman[96558]: 2025-12-01 09:51:46.117757093 +0000 UTC m=+4.990490705 container died 2cca59f4b6f8d764ba0f16ab5037891993074a1c078910e49c628a8802bdb983 (image=quay.io/ceph/ceph:v19, name=intelligent_villani, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:51:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:46 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec  1 04:51:46 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.d scrub starts
Dec  1 04:51:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:46 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:47 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.d scrub ok
Dec  1 04:51:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec  1 04:51:47 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec  1 04:51:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v76: 353 pgs: 8 peering, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 528 B/s, 14 objects/s recovering
Dec  1 04:51:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:47 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:47 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Dec  1 04:51:47 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Dec  1 04:51:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:48 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:51:48 np0005540825 systemd[1]: var-lib-containers-storage-overlay-65514968e22fb3b1aea8782807acaf946b33f549cc992de22fd2d193e4537b3a-merged.mount: Deactivated successfully.
Dec  1 04:51:48 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Dec  1 04:51:48 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Dec  1 04:51:48 np0005540825 podman[96558]: 2025-12-01 09:51:48.934732593 +0000 UTC m=+7.807466205 container remove 2cca59f4b6f8d764ba0f16ab5037891993074a1c078910e49c628a8802bdb983 (image=quay.io/ceph/ceph:v19, name=intelligent_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:51:48 np0005540825 systemd[1]: libpod-conmon-2cca59f4b6f8d764ba0f16ab5037891993074a1c078910e49c628a8802bdb983.scope: Deactivated successfully.
Dec  1 04:51:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:48 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:48 np0005540825 podman[96744]: 2025-12-01 09:51:48.991348268 +0000 UTC m=+5.799101899 container create b740a0d1608fdedd641a7879e609c69be429447dcf505f602346dd5594a9dc76 (image=quay.io/ceph/keepalived:2.2.4, name=blissful_chatterjee, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, description=keepalived for Ceph, release=1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.component=keepalived-container, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 04:51:49 np0005540825 systemd[1]: Started libpod-conmon-b740a0d1608fdedd641a7879e609c69be429447dcf505f602346dd5594a9dc76.scope.
Dec  1 04:51:49 np0005540825 podman[96744]: 2025-12-01 09:51:48.978626196 +0000 UTC m=+5.786379837 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  1 04:51:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:51:49 np0005540825 python3[96889]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 365f19c2-81e5-5edd-b6b4-280555214d3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:51:49 np0005540825 podman[96744]: 2025-12-01 09:51:49.325125191 +0000 UTC m=+6.132878832 container init b740a0d1608fdedd641a7879e609c69be429447dcf505f602346dd5594a9dc76 (image=quay.io/ceph/keepalived:2.2.4, name=blissful_chatterjee, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec  1 04:51:49 np0005540825 podman[96744]: 2025-12-01 09:51:49.333904317 +0000 UTC m=+6.141657938 container start b740a0d1608fdedd641a7879e609c69be429447dcf505f602346dd5594a9dc76 (image=quay.io/ceph/keepalived:2.2.4, name=blissful_chatterjee, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, description=keepalived for Ceph, vcs-type=git, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Dec  1 04:51:49 np0005540825 podman[96744]: 2025-12-01 09:51:49.337551985 +0000 UTC m=+6.145305636 container attach b740a0d1608fdedd641a7879e609c69be429447dcf505f602346dd5594a9dc76 (image=quay.io/ceph/keepalived:2.2.4, name=blissful_chatterjee, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, distribution-scope=public, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, architecture=x86_64)
Dec  1 04:51:49 np0005540825 blissful_chatterjee[96861]: 0 0
Dec  1 04:51:49 np0005540825 systemd[1]: libpod-b740a0d1608fdedd641a7879e609c69be429447dcf505f602346dd5594a9dc76.scope: Deactivated successfully.
Dec  1 04:51:49 np0005540825 podman[96744]: 2025-12-01 09:51:49.342002705 +0000 UTC m=+6.149756346 container died b740a0d1608fdedd641a7879e609c69be429447dcf505f602346dd5594a9dc76 (image=quay.io/ceph/keepalived:2.2.4, name=blissful_chatterjee, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.openshift.expose-services=, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, name=keepalived)
Dec  1 04:51:49 np0005540825 podman[96890]: 2025-12-01 09:51:49.342138079 +0000 UTC m=+0.032124027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:51:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 8 peering, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 115 B/s, 2 objects/s recovering
Dec  1 04:51:49 np0005540825 podman[96890]: 2025-12-01 09:51:49.597824237 +0000 UTC m=+0.287810115 container create 1f2f6a5527b0fd490c919b86853294a948e57615a29240acae299bbbd3951d2a (image=quay.io/ceph/ceph:v19, name=distracted_edison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 04:51:49 np0005540825 systemd[1]: var-lib-containers-storage-overlay-691b5f25b8119c3fa8e398608feca0829f39f07eb945f8a3f329822d94142b6d-merged.mount: Deactivated successfully.
Dec  1 04:51:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:49 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:49 np0005540825 podman[96744]: 2025-12-01 09:51:49.623642223 +0000 UTC m=+6.431395884 container remove b740a0d1608fdedd641a7879e609c69be429447dcf505f602346dd5594a9dc76 (image=quay.io/ceph/keepalived:2.2.4, name=blissful_chatterjee, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, distribution-scope=public, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=Ceph keepalived, name=keepalived, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  1 04:51:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:51:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:51:49 np0005540825 systemd[1]: Started libpod-conmon-1f2f6a5527b0fd490c919b86853294a948e57615a29240acae299bbbd3951d2a.scope.
Dec  1 04:51:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:51:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:51:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:51:49 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:51:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:51:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bfa654c85a4bb7c6f369689a7c8c03930246ea2d6bbd6a7c88b2a41939c016/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:51:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bfa654c85a4bb7c6f369689a7c8c03930246ea2d6bbd6a7c88b2a41939c016/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:51:49 np0005540825 systemd[1]: Reloading.
Dec  1 04:51:49 np0005540825 podman[96890]: 2025-12-01 09:51:49.687875103 +0000 UTC m=+0.377860991 container init 1f2f6a5527b0fd490c919b86853294a948e57615a29240acae299bbbd3951d2a (image=quay.io/ceph/ceph:v19, name=distracted_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec  1 04:51:49 np0005540825 podman[96890]: 2025-12-01 09:51:49.695446817 +0000 UTC m=+0.385432695 container start 1f2f6a5527b0fd490c919b86853294a948e57615a29240acae299bbbd3951d2a (image=quay.io/ceph/ceph:v19, name=distracted_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 04:51:49 np0005540825 podman[96890]: 2025-12-01 09:51:49.699784754 +0000 UTC m=+0.389770672 container attach 1f2f6a5527b0fd490c919b86853294a948e57615a29240acae299bbbd3951d2a (image=quay.io/ceph/ceph:v19, name=distracted_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:51:49 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:51:49 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:51:49 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Dec  1 04:51:49 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Dec  1 04:51:49 np0005540825 systemd[1]: libpod-conmon-b740a0d1608fdedd641a7879e609c69be429447dcf505f602346dd5594a9dc76.scope: Deactivated successfully.
Dec  1 04:51:49 np0005540825 distracted_edison[96922]: {
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "user_id": "openstack",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "display_name": "openstack",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "email": "",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "suspended": 0,
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "max_buckets": 1000,
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "subusers": [],
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "keys": [
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        {
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:            "user": "openstack",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:            "access_key": "V3I0Y2C1KZCR5OKMSJD2",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:            "secret_key": "kXCDcvH5xZEknyOLFkDtNrRo6rAuaXTS59CHxxx7",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:            "active": true,
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:            "create_date": "2025-12-01T09:51:49.929892Z"
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        }
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    ],
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "swift_keys": [],
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "caps": [],
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "op_mask": "read, write, delete",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "default_placement": "",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "default_storage_class": "",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "placement_tags": [],
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "bucket_quota": {
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        "enabled": false,
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        "check_on_raw": false,
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        "max_size": -1,
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        "max_size_kb": 0,
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        "max_objects": -1
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    },
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "user_quota": {
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        "enabled": false,
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        "check_on_raw": false,
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        "max_size": -1,
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        "max_size_kb": 0,
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:        "max_objects": -1
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    },
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "temp_url_keys": [],
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "type": "rgw",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "mfa_ids": [],
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "account_id": "",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "path": "/",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "create_date": "2025-12-01T09:51:49.929506Z",
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "tags": [],
Dec  1 04:51:49 np0005540825 distracted_edison[96922]:    "group_ids": []
Dec  1 04:51:49 np0005540825 distracted_edison[96922]: }
Dec  1 04:51:49 np0005540825 distracted_edison[96922]: 
Dec  1 04:51:49 np0005540825 systemd[1]: Reloading.
Dec  1 04:51:50 np0005540825 podman[96890]: 2025-12-01 09:51:50.000214468 +0000 UTC m=+0.690200336 container died 1f2f6a5527b0fd490c919b86853294a948e57615a29240acae299bbbd3951d2a (image=quay.io/ceph/ceph:v19, name=distracted_edison, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  1 04:51:50 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:51:50 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:51:50 np0005540825 systemd[1]: libpod-1f2f6a5527b0fd490c919b86853294a948e57615a29240acae299bbbd3951d2a.scope: Deactivated successfully.
Dec  1 04:51:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:50 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:50 np0005540825 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.gzwexr for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:51:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095150 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 04:51:50 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d4bfa654c85a4bb7c6f369689a7c8c03930246ea2d6bbd6a7c88b2a41939c016-merged.mount: Deactivated successfully.
Dec  1 04:51:50 np0005540825 podman[96890]: 2025-12-01 09:51:50.719470045 +0000 UTC m=+1.409455913 container remove 1f2f6a5527b0fd490c919b86853294a948e57615a29240acae299bbbd3951d2a (image=quay.io/ceph/ceph:v19, name=distracted_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:51:50 np0005540825 systemd[1]: libpod-conmon-1f2f6a5527b0fd490c919b86853294a948e57615a29240acae299bbbd3951d2a.scope: Deactivated successfully.
Dec  1 04:51:50 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.1b deep-scrub starts
Dec  1 04:51:50 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.1b deep-scrub ok
Dec  1 04:51:50 np0005540825 podman[97149]: 2025-12-01 09:51:50.937943451 +0000 UTC m=+0.053920044 container create a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, distribution-scope=public, release=1793, description=keepalived for Ceph, name=keepalived, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container)
Dec  1 04:51:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:50 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37e2b2d72a0731829143dad7984be16f1d02902d709783c148947a3eed6c111/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:51:50 np0005540825 podman[97149]: 2025-12-01 09:51:50.995765519 +0000 UTC m=+0.111742132 container init a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, vcs-type=git, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=)
Dec  1 04:51:51 np0005540825 podman[97149]: 2025-12-01 09:51:51.002344856 +0000 UTC m=+0.118321449 container start a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.buildah.version=1.28.2)
Dec  1 04:51:51 np0005540825 podman[97149]: 2025-12-01 09:51:50.911836047 +0000 UTC m=+0.027812730 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  1 04:51:51 np0005540825 bash[97149]: a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac
Dec  1 04:51:51 np0005540825 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.gzwexr for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:51:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:51:51 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec  1 04:51:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:51:51 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Dec  1 04:51:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:51:51 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec  1 04:51:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:51:51 2025: Configuration file /etc/keepalived/keepalived.conf
Dec  1 04:51:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:51:51 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec  1 04:51:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:51:51 2025: Starting VRRP child process, pid=4
Dec  1 04:51:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:51:51 2025: Startup complete
Dec  1 04:51:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:51:51 2025: (VI_0) Entering BACKUP STATE (init)
Dec  1 04:51:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:51:51 2025: VRRP_Script(check_backend) succeeded
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:51 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:51:51 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:51:51 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:51:51 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:51:51 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  1 04:51:51 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  1 04:51:51 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.vkgipv on compute-2
Dec  1 04:51:51 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.vkgipv on compute-2
Dec  1 04:51:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v78: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 164 B/s, 7 objects/s recovering
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  1 04:51:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:51 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:51 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Dec  1 04:51:51 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:51 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  1 04:51:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec  1 04:51:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  1 04:51:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec  1 04:51:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec  1 04:51:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:52 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:52 np0005540825 python3[97196]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:51:52 np0005540825 ceph-mgr[74709]: [dashboard INFO request] [192.168.122.100:52108] [GET] [200] [0.122s] [6.3K] [3cee2dde-9668-4236-9dfa-6bbb0d7a1cb2] /
Dec  1 04:51:52 np0005540825 python3[97220]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:51:52 np0005540825 ceph-mgr[74709]: [dashboard INFO request] [192.168.122.100:52110] [GET] [200] [0.002s] [6.3K] [c5c71f4c-877e-4d33-a6cd-bcb47e131507] /
Dec  1 04:51:52 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.14 deep-scrub starts
Dec  1 04:51:52 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.14 deep-scrub ok
Dec  1 04:51:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:52 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:53 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:51:53 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:51:53 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  1 04:51:53 np0005540825 ceph-mon[74416]: Deploying daemon keepalived.nfs.cephfs.compute-2.vkgipv on compute-2
Dec  1 04:51:53 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  1 04:51:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:51:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 164 B/s, 7 objects/s recovering
Dec  1 04:51:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec  1 04:51:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  1 04:51:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:53 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad080016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:53 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Dec  1 04:51:53 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Dec  1 04:51:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec  1 04:51:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:54 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  1 04:51:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec  1 04:51:54 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec  1 04:51:54 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 74 pg[10.14( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74 pruub=13.484194756s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 220.498458862s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:54 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 74 pg[10.14( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74 pruub=13.484160423s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.498458862s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:54 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 74 pg[10.c( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74 pruub=13.483621597s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 220.498184204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:54 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 74 pg[10.c( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74 pruub=13.483588219s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.498184204s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:54 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 74 pg[10.4( v 72'1022 (0'0,72'1022] local-lis/les=61/62 n=10 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74 pruub=13.482481956s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=72'1022 lcod 72'1021 mlcod 72'1021 active pruub 220.497268677s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:54 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 74 pg[10.4( v 72'1022 (0'0,72'1022] local-lis/les=61/62 n=10 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74 pruub=13.482436180s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=72'1022 lcod 72'1021 mlcod 0'0 unknown NOTIFY pruub 220.497268677s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:54 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 74 pg[10.1c( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74 pruub=13.482118607s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 220.497238159s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:54 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 74 pg[10.1c( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74 pruub=13.482095718s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.497238159s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:51:54 2025: (VI_0) Entering MASTER STATE
Dec  1 04:51:54 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  1 04:51:54 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec  1 04:51:54 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec  1 04:51:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:54 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s; 68 B/s, 5 objects/s recovering
Dec  1 04:51:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  1 04:51:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  1 04:51:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:55 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:55 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec  1 04:51:55 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec  1 04:51:55 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event d37f4c6c-7924-4b06-8d4e-6519efd1bdf6 (Global Recovery Event) in 30 seconds
Dec  1 04:51:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:56 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad080016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec  1 04:51:56 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  1 04:51:56 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  1 04:51:56 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec  1 04:51:56 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec  1 04:51:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  1 04:51:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec  1 04:51:56 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec  1 04:51:56 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 75 pg[10.4( v 72'1022 (0'0,72'1022] local-lis/les=61/62 n=10 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] r=0 lpr=75 pi=[61,75)/1 crt=72'1022 lcod 72'1021 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:56 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 75 pg[10.1c( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] r=0 lpr=75 pi=[61,75)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:56 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 75 pg[10.4( v 72'1022 (0'0,72'1022] local-lis/les=61/62 n=10 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] r=0 lpr=75 pi=[61,75)/1 crt=72'1022 lcod 72'1021 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:56 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 75 pg[10.1c( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] r=0 lpr=75 pi=[61,75)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:56 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 75 pg[10.14( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] r=0 lpr=75 pi=[61,75)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:56 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 75 pg[10.14( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] r=0 lpr=75 pi=[61,75)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:56 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 75 pg[10.c( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] r=0 lpr=75 pi=[61,75)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:56 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 75 pg[10.c( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] r=0 lpr=75 pi=[61,75)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:56 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v84: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:51:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  1 04:51:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  1 04:51:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:57 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:57 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec  1 04:51:57 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec  1 04:51:57 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  1 04:51:57 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  1 04:51:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec  1 04:51:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  1 04:51:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec  1 04:51:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 76 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=76) [1] r=0 lpr=76 pi=[71,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 76 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=76) [1] r=0 lpr=76 pi=[71,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 76 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=76) [1] r=0 lpr=76 pi=[71,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 76 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=76) [1] r=0 lpr=76 pi=[71,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 76 pg[10.14( v 56'1015 (0'0,56'1015] local-lis/les=75/76 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[61,75)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 76 pg[10.1c( v 56'1015 (0'0,56'1015] local-lis/les=75/76 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[61,75)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 76 pg[10.c( v 56'1015 (0'0,56'1015] local-lis/les=75/76 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[61,75)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 76 pg[10.4( v 72'1022 (0'0,72'1022] local-lis/les=75/76 n=10 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[61,75)/1 crt=72'1022 lcod 72'1021 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:51:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:58 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:51:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec  1 04:51:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec  1 04:51:58 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=77) [1]/[0] r=-1 lpr=77 pi=[71,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=77) [1]/[0] r=-1 lpr=77 pi=[71,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 77 pg[10.14( v 56'1015 (0'0,56'1015] local-lis/les=75/76 n=5 ec=61/50 lis/c=75/61 les/c/f=76/62/0 sis=77 pruub=15.722870827s) [2] async=[2] r=-1 lpr=77 pi=[61,77)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 226.628143311s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 77 pg[10.14( v 56'1015 (0'0,56'1015] local-lis/les=75/76 n=5 ec=61/50 lis/c=75/61 les/c/f=76/62/0 sis=77 pruub=15.722768784s) [2] r=-1 lpr=77 pi=[61,77)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.628143311s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=77) [1]/[0] r=-1 lpr=77 pi=[71,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=77) [1]/[0] r=-1 lpr=77 pi=[71,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=77) [1]/[0] r=-1 lpr=77 pi=[71,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=77) [1]/[0] r=-1 lpr=77 pi=[71,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=77) [1]/[0] r=-1 lpr=77 pi=[71,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=77) [1]/[0] r=-1 lpr=77 pi=[71,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec  1 04:51:58 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec  1 04:51:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:51:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:58 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad080016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:59 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev e3f3bcef-0ef6-4f1a-8ef4-781e47f84427 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec  1 04:51:59 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event e3f3bcef-0ef6-4f1a-8ef4-781e47f84427 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 47 seconds
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:51:59 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev abddad0f-8e1d-4914-8513-ed18cf7e37c9 (Updating alertmanager deployment (+1 -> 1))
Dec  1 04:51:59 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Dec  1 04:51:59 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec  1 04:51:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:51:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:51:59 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec  1 04:51:59 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec  1 04:51:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 78 pg[10.c( v 56'1015 (0'0,56'1015] local-lis/les=75/76 n=6 ec=61/50 lis/c=75/61 les/c/f=76/62/0 sis=78 pruub=14.512928963s) [2] async=[2] r=-1 lpr=78 pi=[61,78)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 226.641799927s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 78 pg[10.4( v 76'1027 (0'0,76'1027] local-lis/les=75/76 n=10 ec=61/50 lis/c=75/61 les/c/f=76/62/0 sis=78 pruub=14.512849808s) [2] async=[2] r=-1 lpr=78 pi=[61,78)/1 crt=72'1022 lcod 76'1026 mlcod 76'1026 active pruub 226.641845703s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 78 pg[10.4( v 76'1027 (0'0,76'1027] local-lis/les=75/76 n=10 ec=61/50 lis/c=75/61 les/c/f=76/62/0 sis=78 pruub=14.512746811s) [2] r=-1 lpr=78 pi=[61,78)/1 crt=72'1022 lcod 76'1026 mlcod 0'0 unknown NOTIFY pruub 226.641845703s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 78 pg[10.c( v 56'1015 (0'0,56'1015] local-lis/les=75/76 n=6 ec=61/50 lis/c=75/61 les/c/f=76/62/0 sis=78 pruub=14.512641907s) [2] r=-1 lpr=78 pi=[61,78)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.641799927s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:51:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 78 pg[10.1c( v 56'1015 (0'0,56'1015] local-lis/les=75/76 n=7 ec=61/50 lis/c=75/61 les/c/f=76/62/0 sis=78 pruub=14.512475967s) [2] async=[2] r=-1 lpr=78 pi=[61,78)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 226.641845703s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:51:59 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 78 pg[10.1c( v 56'1015 (0'0,56'1015] local-lis/les=75/76 n=7 ec=61/50 lis/c=75/61 les/c/f=76/62/0 sis=78 pruub=14.512425423s) [2] r=-1 lpr=78 pi=[61,78)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.641845703s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:00 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:00 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:00 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:00 np0005540825 ceph-mon[74416]: Deploying daemon alertmanager.compute-0 on compute-0
Dec  1 04:52:00 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  1 04:52:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:00 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec  1 04:52:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  1 04:52:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec  1 04:52:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.7( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=79) [1] r=0 lpr=79 pi=[70,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=79) [1] r=0 lpr=79 pi=[70,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.6( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=6 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.6( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=6 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=79) [1] r=0 lpr=79 pi=[69,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.16( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=4 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.e( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=6 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.e( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=6 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.16( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=4 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:00 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 79 pg[10.17( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=79) [1] r=0 lpr=79 pi=[69,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:00 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 25 completed events
Dec  1 04:52:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:52:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:00 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:00 np0005540825 ceph-mgr[74709]: [progress WARNING root] Starting Global Recovery Event,8 pgs not in active + clean state
Dec  1 04:52:01 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  1 04:52:01 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 4 active+remapped, 4 peering, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 203 B/s, 11 objects/s recovering
Dec  1 04:52:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:01 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:01 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec  1 04:52:01 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec  1 04:52:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec  1 04:52:02 np0005540825 podman[97315]: 2025-12-01 09:52:02.000910349 +0000 UTC m=+2.228032135 volume create f1e640c0decc5caf8d3e07f3ee02277ec664a97d9f12f90649cda979cb28f471
Dec  1 04:52:02 np0005540825 podman[97315]: 2025-12-01 09:52:01.983016157 +0000 UTC m=+2.210137993 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  1 04:52:02 np0005540825 podman[97315]: 2025-12-01 09:52:02.008979147 +0000 UTC m=+2.236100933 container create be15d2df732a912eb00b57633162597b6f1892886332c541a3c736617ab72e62 (image=quay.io/prometheus/alertmanager:v0.25.0, name=silly_ramanujan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 systemd[1]: Started libpod-conmon-be15d2df732a912eb00b57633162597b6f1892886332c541a3c736617ab72e62.scope.
Dec  1 04:52:02 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9443de03df4d5b26f1694cd58ecf7765f25ed50ee6b0d64a10c4080e2fdb1d5f/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:02 np0005540825 podman[97315]: 2025-12-01 09:52:02.104672865 +0000 UTC m=+2.331794661 container init be15d2df732a912eb00b57633162597b6f1892886332c541a3c736617ab72e62 (image=quay.io/prometheus/alertmanager:v0.25.0, name=silly_ramanujan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 podman[97315]: 2025-12-01 09:52:02.112418744 +0000 UTC m=+2.339540530 container start be15d2df732a912eb00b57633162597b6f1892886332c541a3c736617ab72e62 (image=quay.io/prometheus/alertmanager:v0.25.0, name=silly_ramanujan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 silly_ramanujan[97448]: 65534 65534
Dec  1 04:52:02 np0005540825 systemd[1]: libpod-be15d2df732a912eb00b57633162597b6f1892886332c541a3c736617ab72e62.scope: Deactivated successfully.
Dec  1 04:52:02 np0005540825 podman[97315]: 2025-12-01 09:52:02.116779991 +0000 UTC m=+2.343901787 container attach be15d2df732a912eb00b57633162597b6f1892886332c541a3c736617ab72e62 (image=quay.io/prometheus/alertmanager:v0.25.0, name=silly_ramanujan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 conmon[97448]: conmon be15d2df732a912eb00b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be15d2df732a912eb00b57633162597b6f1892886332c541a3c736617ab72e62.scope/container/memory.events
Dec  1 04:52:02 np0005540825 podman[97315]: 2025-12-01 09:52:02.119101254 +0000 UTC m=+2.346223040 container died be15d2df732a912eb00b57633162597b6f1892886332c541a3c736617ab72e62 (image=quay.io/prometheus/alertmanager:v0.25.0, name=silly_ramanujan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9443de03df4d5b26f1694cd58ecf7765f25ed50ee6b0d64a10c4080e2fdb1d5f-merged.mount: Deactivated successfully.
Dec  1 04:52:02 np0005540825 podman[97315]: 2025-12-01 09:52:02.164565148 +0000 UTC m=+2.391686934 container remove be15d2df732a912eb00b57633162597b6f1892886332c541a3c736617ab72e62 (image=quay.io/prometheus/alertmanager:v0.25.0, name=silly_ramanujan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 podman[97315]: 2025-12-01 09:52:02.170742255 +0000 UTC m=+2.397864051 volume remove f1e640c0decc5caf8d3e07f3ee02277ec664a97d9f12f90649cda979cb28f471
Dec  1 04:52:02 np0005540825 systemd[1]: libpod-conmon-be15d2df732a912eb00b57633162597b6f1892886332c541a3c736617ab72e62.scope: Deactivated successfully.
Dec  1 04:52:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:02 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:02 np0005540825 podman[97463]: 2025-12-01 09:52:02.241215373 +0000 UTC m=+0.044018696 volume create 87f575ea4e387cba2f83a6d412e1853ff7c87b1ef24e967f570dbced47f8457b
Dec  1 04:52:02 np0005540825 podman[97463]: 2025-12-01 09:52:02.253853224 +0000 UTC m=+0.056656517 container create 2f67f470f7ca8315e08542bb7ee2d6b9ff82a36a85ca3cfaf0a76309f9387aa4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_knuth, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 systemd[1]: Started libpod-conmon-2f67f470f7ca8315e08542bb7ee2d6b9ff82a36a85ca3cfaf0a76309f9387aa4.scope.
Dec  1 04:52:02 np0005540825 podman[97463]: 2025-12-01 09:52:02.220803424 +0000 UTC m=+0.023606747 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  1 04:52:02 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03d08aae4940f460b6d12a739a7a678f04f52e9d198a81722d2ce34c1dac9592/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:02 np0005540825 podman[97463]: 2025-12-01 09:52:02.347794935 +0000 UTC m=+0.150598248 container init 2f67f470f7ca8315e08542bb7ee2d6b9ff82a36a85ca3cfaf0a76309f9387aa4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_knuth, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 podman[97463]: 2025-12-01 09:52:02.353584411 +0000 UTC m=+0.156387714 container start 2f67f470f7ca8315e08542bb7ee2d6b9ff82a36a85ca3cfaf0a76309f9387aa4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_knuth, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 zen_knuth[97482]: 65534 65534
Dec  1 04:52:02 np0005540825 systemd[1]: libpod-2f67f470f7ca8315e08542bb7ee2d6b9ff82a36a85ca3cfaf0a76309f9387aa4.scope: Deactivated successfully.
Dec  1 04:52:02 np0005540825 podman[97463]: 2025-12-01 09:52:02.357515177 +0000 UTC m=+0.160318510 container attach 2f67f470f7ca8315e08542bb7ee2d6b9ff82a36a85ca3cfaf0a76309f9387aa4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_knuth, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 podman[97463]: 2025-12-01 09:52:02.35800798 +0000 UTC m=+0.160811273 container died 2f67f470f7ca8315e08542bb7ee2d6b9ff82a36a85ca3cfaf0a76309f9387aa4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_knuth, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 systemd[1]: var-lib-containers-storage-overlay-03d08aae4940f460b6d12a739a7a678f04f52e9d198a81722d2ce34c1dac9592-merged.mount: Deactivated successfully.
Dec  1 04:52:02 np0005540825 podman[97463]: 2025-12-01 09:52:02.403718321 +0000 UTC m=+0.206521614 container remove 2f67f470f7ca8315e08542bb7ee2d6b9ff82a36a85ca3cfaf0a76309f9387aa4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zen_knuth, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:02 np0005540825 podman[97463]: 2025-12-01 09:52:02.409256301 +0000 UTC m=+0.212059594 volume remove 87f575ea4e387cba2f83a6d412e1853ff7c87b1ef24e967f570dbced47f8457b
Dec  1 04:52:02 np0005540825 systemd[1]: libpod-conmon-2f67f470f7ca8315e08542bb7ee2d6b9ff82a36a85ca3cfaf0a76309f9387aa4.scope: Deactivated successfully.
Dec  1 04:52:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:52:02 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Dec  1 04:52:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec  1 04:52:02 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.17( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[69,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.17( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[69,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[69,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[69,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[70,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[70,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.7( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[70,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.7( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[70,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.e( v 56'1015 (0'0,56'1015] local-lis/les=79/80 n=6 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.6( v 56'1015 (0'0,56'1015] local-lis/les=79/80 n=6 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.16( v 56'1015 (0'0,56'1015] local-lis/les=79/80 n=4 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 80 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=79/80 n=5 ec=61/50 lis/c=77/71 les/c/f=78/72/0 sis=79) [1] r=0 lpr=79 pi=[71,79)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:02 np0005540825 systemd[1]: Reloading.
Dec  1 04:52:02 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:52:02 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:52:02 np0005540825 systemd[1]: Reloading.
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec  1 04:52:02 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec  1 04:52:02 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:52:02 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:52:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:02 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:03 np0005540825 systemd[1]: Starting Ceph alertmanager.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:52:03 np0005540825 podman[97626]: 2025-12-01 09:52:03.293996265 +0000 UTC m=+0.036211297 volume create 7abf4f7c201a9a09023b9e12e8b047ddf4a7274c86ac3597c2d5b0d07c7b6c6d
Dec  1 04:52:03 np0005540825 podman[97626]: 2025-12-01 09:52:03.307247382 +0000 UTC m=+0.049462414 container create 0511cb329529d79a0314faf710797871465300fa18afe5331763ee944339d662 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d23945a90f26ec1cb71a36d1aaf85f1b4860a553ba9333bae61fe9e515864e6/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d23945a90f26ec1cb71a36d1aaf85f1b4860a553ba9333bae61fe9e515864e6/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:03 np0005540825 podman[97626]: 2025-12-01 09:52:03.367818574 +0000 UTC m=+0.110033636 container init 0511cb329529d79a0314faf710797871465300fa18afe5331763ee944339d662 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:03 np0005540825 podman[97626]: 2025-12-01 09:52:03.372158241 +0000 UTC m=+0.114373273 container start 0511cb329529d79a0314faf710797871465300fa18afe5331763ee944339d662 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:03 np0005540825 podman[97626]: 2025-12-01 09:52:03.277701396 +0000 UTC m=+0.019916458 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  1 04:52:03 np0005540825 bash[97626]: 0511cb329529d79a0314faf710797871465300fa18afe5331763ee944339d662
Dec  1 04:52:03 np0005540825 systemd[1]: Started Ceph alertmanager.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:52:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[97641]: ts=2025-12-01T09:52:03.398Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec  1 04:52:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[97641]: ts=2025-12-01T09:52:03.398Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec  1 04:52:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[97641]: ts=2025-12-01T09:52:03.407Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec  1 04:52:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[97641]: ts=2025-12-01T09:52:03.409Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[97641]: ts=2025-12-01T09:52:03.444Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  1 04:52:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[97641]: ts=2025-12-01T09:52:03.444Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  1 04:52:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[97641]: ts=2025-12-01T09:52:03.449Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec  1 04:52:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[97641]: ts=2025-12-01T09:52:03.449Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec  1 04:52:03 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev abddad0f-8e1d-4914-8513-ed18cf7e37c9 (Updating alertmanager deployment (+1 -> 1))
Dec  1 04:52:03 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event abddad0f-8e1d-4914-8513-ed18cf7e37c9 (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:03 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev aeda2b2a-3aee-41c8-ba3a-4afb1dbc0e15 (Updating grafana deployment (+1 -> 1))
Dec  1 04:52:03 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Dec  1 04:52:03 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Dec  1 04:52:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v93: 353 pgs: 1 active+clean+scrubbing, 4 active+remapped, 4 peering, 344 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 320 B/s, 17 objects/s recovering
Dec  1 04:52:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:03 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Dec  1 04:52:03 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec  1 04:52:03 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  1 04:52:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:04 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Dec  1 04:52:04 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Dec  1 04:52:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:04 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:04 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec  1 04:52:04 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: Regenerating cephadm self-signed grafana TLS certificates
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  1 04:52:04 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:04 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec  1 04:52:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec  1 04:52:05 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec  1 04:52:05 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 82 pg[10.17( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=80/69 les/c/f=81/70/0 sis=82) [1] r=0 lpr=82 pi=[69,82)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:05 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 82 pg[10.17( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=80/69 les/c/f=81/70/0 sis=82) [1] r=0 lpr=82 pi=[69,82)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:05 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 82 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=7 ec=61/50 lis/c=80/69 les/c/f=81/70/0 sis=82) [1] r=0 lpr=82 pi=[69,82)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:05 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 82 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=7 ec=61/50 lis/c=80/69 les/c/f=81/70/0 sis=82) [1] r=0 lpr=82 pi=[69,82)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:05 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 82 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=80/70 les/c/f=81/71/0 sis=82) [1] r=0 lpr=82 pi=[70,82)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:05 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 82 pg[10.7( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=80/70 les/c/f=81/71/0 sis=82) [1] r=0 lpr=82 pi=[70,82)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:05 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 82 pg[10.7( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=80/70 les/c/f=81/71/0 sis=82) [1] r=0 lpr=82 pi=[70,82)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:05 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 82 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=80/70 les/c/f=81/71/0 sis=82) [1] r=0 lpr=82 pi=[70,82)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[97641]: ts=2025-12-01T09:52:05.409Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000403672s
Dec  1 04:52:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 1 active+clean+scrubbing, 4 active+remapped, 4 peering, 344 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 99 B/s, 5 objects/s recovering
Dec  1 04:52:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:05 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:05 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec  1 04:52:05 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec  1 04:52:05 np0005540825 ceph-mon[74416]: Deploying daemon grafana.compute-0 on compute-0
Dec  1 04:52:06 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 26 completed events
Dec  1 04:52:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:52:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec  1 04:52:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec  1 04:52:06 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec  1 04:52:06 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 83 pg[10.17( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=5 ec=61/50 lis/c=80/69 les/c/f=81/70/0 sis=82) [1] r=0 lpr=82 pi=[69,82)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:06 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 83 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=7 ec=61/50 lis/c=80/69 les/c/f=81/70/0 sis=82) [1] r=0 lpr=82 pi=[69,82)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:06 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 83 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=5 ec=61/50 lis/c=80/70 les/c/f=81/71/0 sis=82) [1] r=0 lpr=82 pi=[70,82)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:06 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 83 pg[10.7( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=5 ec=61/50 lis/c=80/70 les/c/f=81/71/0 sis=82) [1] r=0 lpr=82 pi=[70,82)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:06 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:06 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec  1 04:52:06 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec  1 04:52:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:06 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:07 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:07 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:07 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec  1 04:52:07 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec  1 04:52:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:08 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:52:08 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec  1 04:52:08 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec  1 04:52:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:08 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:09 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:09 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec  1 04:52:09 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec  1 04:52:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:10 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:10 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec  1 04:52:10 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec  1 04:52:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:10 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec  1 04:52:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  1 04:52:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:11 np0005540825 podman[97751]: 2025-12-01 09:52:11.724278059 +0000 UTC m=+7.021735618 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  1 04:52:11 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec  1 04:52:11 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec  1 04:52:11 np0005540825 podman[97751]: 2025-12-01 09:52:11.84272593 +0000 UTC m=+7.140183469 container create 19d13b271a83cc93ebb9f67f78e934e1afb9741dc0b32f1b6c5816ec21e629e9 (image=quay.io/ceph/grafana:10.4.0, name=distracted_tesla, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:11 np0005540825 systemd[1]: Started libpod-conmon-19d13b271a83cc93ebb9f67f78e934e1afb9741dc0b32f1b6c5816ec21e629e9.scope.
Dec  1 04:52:11 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:11 np0005540825 podman[97751]: 2025-12-01 09:52:11.950028801 +0000 UTC m=+7.247486360 container init 19d13b271a83cc93ebb9f67f78e934e1afb9741dc0b32f1b6c5816ec21e629e9 (image=quay.io/ceph/grafana:10.4.0, name=distracted_tesla, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:11 np0005540825 podman[97751]: 2025-12-01 09:52:11.959978409 +0000 UTC m=+7.257435958 container start 19d13b271a83cc93ebb9f67f78e934e1afb9741dc0b32f1b6c5816ec21e629e9 (image=quay.io/ceph/grafana:10.4.0, name=distracted_tesla, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:11 np0005540825 distracted_tesla[97993]: 472 0
Dec  1 04:52:11 np0005540825 systemd[1]: libpod-19d13b271a83cc93ebb9f67f78e934e1afb9741dc0b32f1b6c5816ec21e629e9.scope: Deactivated successfully.
Dec  1 04:52:11 np0005540825 podman[97751]: 2025-12-01 09:52:11.966360331 +0000 UTC m=+7.263817900 container attach 19d13b271a83cc93ebb9f67f78e934e1afb9741dc0b32f1b6c5816ec21e629e9 (image=quay.io/ceph/grafana:10.4.0, name=distracted_tesla, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:11 np0005540825 podman[97751]: 2025-12-01 09:52:11.967230234 +0000 UTC m=+7.264687833 container died 19d13b271a83cc93ebb9f67f78e934e1afb9741dc0b32f1b6c5816ec21e629e9 (image=quay.io/ceph/grafana:10.4.0, name=distracted_tesla, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:12 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0545f46f1e178c981dc755ab3a16c04359e306b9d94fdf93d20c23a40a062c6c-merged.mount: Deactivated successfully.
Dec  1 04:52:12 np0005540825 podman[97751]: 2025-12-01 09:52:12.094285687 +0000 UTC m=+7.391743226 container remove 19d13b271a83cc93ebb9f67f78e934e1afb9741dc0b32f1b6c5816ec21e629e9 (image=quay.io/ceph/grafana:10.4.0, name=distracted_tesla, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:12 np0005540825 systemd[1]: libpod-conmon-19d13b271a83cc93ebb9f67f78e934e1afb9741dc0b32f1b6c5816ec21e629e9.scope: Deactivated successfully.
Dec  1 04:52:12 np0005540825 podman[98011]: 2025-12-01 09:52:12.183635234 +0000 UTC m=+0.058694222 container create 28223a56f54d92d7c1d45ec73d4ce77c640242e87346c6fbff5f7ea5597c98f3 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_easley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:12 np0005540825 systemd[1]: Started libpod-conmon-28223a56f54d92d7c1d45ec73d4ce77c640242e87346c6fbff5f7ea5597c98f3.scope.
Dec  1 04:52:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:12 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:12 np0005540825 podman[98011]: 2025-12-01 09:52:12.156902684 +0000 UTC m=+0.031961722 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  1 04:52:12 np0005540825 podman[98011]: 2025-12-01 09:52:12.260657249 +0000 UTC m=+0.135716267 container init 28223a56f54d92d7c1d45ec73d4ce77c640242e87346c6fbff5f7ea5597c98f3 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_easley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:12 np0005540825 podman[98011]: 2025-12-01 09:52:12.27256578 +0000 UTC m=+0.147624768 container start 28223a56f54d92d7c1d45ec73d4ce77c640242e87346c6fbff5f7ea5597c98f3 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_easley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:12 np0005540825 wizardly_easley[98027]: 472 0
Dec  1 04:52:12 np0005540825 systemd[1]: libpod-28223a56f54d92d7c1d45ec73d4ce77c640242e87346c6fbff5f7ea5597c98f3.scope: Deactivated successfully.
Dec  1 04:52:12 np0005540825 podman[98011]: 2025-12-01 09:52:12.276845375 +0000 UTC m=+0.151904363 container attach 28223a56f54d92d7c1d45ec73d4ce77c640242e87346c6fbff5f7ea5597c98f3 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_easley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:12 np0005540825 podman[98011]: 2025-12-01 09:52:12.277827282 +0000 UTC m=+0.152886270 container died 28223a56f54d92d7c1d45ec73d4ce77c640242e87346c6fbff5f7ea5597c98f3 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_easley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:12 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ac33016d1b67942d3dcfdabdd178e92d805e582f06805f48ce6683ce3161f641-merged.mount: Deactivated successfully.
Dec  1 04:52:12 np0005540825 podman[98011]: 2025-12-01 09:52:12.321105548 +0000 UTC m=+0.196164536 container remove 28223a56f54d92d7c1d45ec73d4ce77c640242e87346c6fbff5f7ea5597c98f3 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_easley, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:12 np0005540825 systemd[1]: libpod-conmon-28223a56f54d92d7c1d45ec73d4ce77c640242e87346c6fbff5f7ea5597c98f3.scope: Deactivated successfully.
Dec  1 04:52:12 np0005540825 systemd[1]: Reloading.
Dec  1 04:52:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec  1 04:52:12 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:52:12 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:52:12 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  1 04:52:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  1 04:52:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec  1 04:52:12 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec  1 04:52:12 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 84 pg[10.8( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84 pruub=11.433363914s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 236.497665405s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:12 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 84 pg[10.18( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84 pruub=11.433169365s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 236.497665405s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:12 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 84 pg[10.18( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84 pruub=11.433136940s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 236.497665405s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:12 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 84 pg[10.8( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84 pruub=11.432378769s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 236.497665405s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:12 np0005540825 systemd[1]: Reloading.
Dec  1 04:52:12 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Dec  1 04:52:12 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Dec  1 04:52:12 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:52:12 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:52:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:13 np0005540825 systemd[1]: Starting Ceph grafana.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:52:13 np0005540825 podman[98173]: 2025-12-01 09:52:13.316159485 +0000 UTC m=+0.042141936 container create 6eb1185f94a74a666c6b5c09efc32bc1424dea31547c65157a432674ce35a678 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d31f7d48e723406ab9ed22fb0dfbe5a1b660448d71807f5af04abe282adb7e2/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d31f7d48e723406ab9ed22fb0dfbe5a1b660448d71807f5af04abe282adb7e2/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d31f7d48e723406ab9ed22fb0dfbe5a1b660448d71807f5af04abe282adb7e2/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d31f7d48e723406ab9ed22fb0dfbe5a1b660448d71807f5af04abe282adb7e2/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d31f7d48e723406ab9ed22fb0dfbe5a1b660448d71807f5af04abe282adb7e2/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:13 np0005540825 podman[98173]: 2025-12-01 09:52:13.372717869 +0000 UTC m=+0.098700300 container init 6eb1185f94a74a666c6b5c09efc32bc1424dea31547c65157a432674ce35a678 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:13 np0005540825 podman[98173]: 2025-12-01 09:52:13.377402445 +0000 UTC m=+0.103384876 container start 6eb1185f94a74a666c6b5c09efc32bc1424dea31547c65157a432674ce35a678 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:13 np0005540825 bash[98173]: 6eb1185f94a74a666c6b5c09efc32bc1424dea31547c65157a432674ce35a678
Dec  1 04:52:13 np0005540825 podman[98173]: 2025-12-01 09:52:13.298670234 +0000 UTC m=+0.024652665 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  1 04:52:13 np0005540825 systemd[1]: Started Ceph grafana.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[97641]: ts=2025-12-01T09:52:13.411Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002831379s
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec  1 04:52:13 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 85 pg[10.8( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=85) [0]/[1] r=0 lpr=85 pi=[61,85)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:13 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 85 pg[10.8( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=85) [0]/[1] r=0 lpr=85 pi=[61,85)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:13 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 85 pg[10.18( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=85) [0]/[1] r=0 lpr=85 pi=[61,85)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:13 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 85 pg[10.18( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=85) [0]/[1] r=0 lpr=85 pi=[61,85)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:13 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev aeda2b2a-3aee-41c8-ba3a-4afb1dbc0e15 (Updating grafana deployment (+1 -> 1))
Dec  1 04:52:13 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event aeda2b2a-3aee-41c8-ba3a-4afb1dbc0e15 (Updating grafana deployment (+1 -> 1)) in 10 seconds
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.578527303Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-01T09:52:13Z
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.578969205Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579016736Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579046007Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579074948Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579109609Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.57914116Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579178641Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579212082Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579239722Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579267273Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579296554Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579356546Z level=info msg=Target target=[all]
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579391927Z level=info msg="Path Home" path=/usr/share/grafana
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579422717Z level=info msg="Path Data" path=/var/lib/grafana
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579452588Z level=info msg="Path Logs" path=/var/log/grafana
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579482589Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.57951501Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=settings t=2025-12-01T09:52:13.579547381Z level=info msg="App mode production"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=sqlstore t=2025-12-01T09:52:13.579965232Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=sqlstore t=2025-12-01T09:52:13.580023634Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.580891347Z level=info msg="Starting DB migrations"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.582058748Z level=info msg="Executing migration" id="create migration_log table"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.583284781Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.225093ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.585370178Z level=info msg="Executing migration" id="create user table"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.586260002Z level=info msg="Migration successfully executed" id="create user table" duration=889.474µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.588377389Z level=info msg="Executing migration" id="add unique index user.login"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.589027496Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=649.687µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.591348579Z level=info msg="Executing migration" id="add unique index user.email"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.592006796Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=658.057µs
Dec  1 04:52:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.594493253Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.595250574Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=759.271µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.597202206Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.597821983Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=620.767µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.599844248Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.602409467Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.56572ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.60439689Z level=info msg="Executing migration" id="create user table v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.605099929Z level=info msg="Migration successfully executed" id="create user table v2" duration=703.709µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.607010661Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.607680749Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=670.628µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.6151607Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.615839998Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=679.558µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.620864024Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.621217163Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=351.389µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.623485554Z level=info msg="Executing migration" id="Drop old table user_v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.62405774Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=572.646µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.626347842Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.627276367Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=925.504µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.628980372Z level=info msg="Executing migration" id="Update user table charset"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.629049574Z level=info msg="Migration successfully executed" id="Update user table charset" duration=69.562µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.631057188Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.632130927Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.071329ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.633770972Z level=info msg="Executing migration" id="Add missing user data"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.634010778Z level=info msg="Migration successfully executed" id="Add missing user data" duration=239.986µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.635863798Z level=info msg="Executing migration" id="Add is_disabled column to user"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.636822294Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=958.626µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.638711475Z level=info msg="Executing migration" id="Add index user.login/user.email"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:13 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.639398903Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=687.868µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.641471539Z level=info msg="Executing migration" id="Add is_service_account column to user"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.642471606Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.000307ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.64409186Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.650934484Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.838154ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.653202235Z level=info msg="Executing migration" id="Add uid column to user"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.654169231Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=967.206µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.656498264Z level=info msg="Executing migration" id="Update uid column values for users"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.656768451Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=266.597µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.658769575Z level=info msg="Executing migration" id="Add unique index user_uid"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.659398172Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=629.217µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.66303983Z level=info msg="Executing migration" id="create temp user table v1-7"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.663686668Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=646.877µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.666208235Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.666826992Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=618.617µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.669093793Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.669666689Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=573.086µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.671799866Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.672378872Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=578.736µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.674668493Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.675237579Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=569.006µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.677972782Z level=info msg="Executing migration" id="Update temp_user table charset"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.678032874Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=60.782µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.679976156Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.680603883Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=627.537µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.682039412Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.68269749Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=657.447µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.684599421Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.685212247Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=613.026µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.68754932Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.688245549Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=696.509µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.690221072Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.692829203Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.60656ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.694333143Z level=info msg="Executing migration" id="create temp_user v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.695005501Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=673.488µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.697062807Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.698015912Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=952.835µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.700361875Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.701229479Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=867.354µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.705207226Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.706065249Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=858.063µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.707859207Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.708733031Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=874.424µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.711192717Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.711706531Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=513.834µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.71390221Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.714596179Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=688.779µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.71648446Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.717040055Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=554.925µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.719402428Z level=info msg="Executing migration" id="create star table"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.720231641Z level=info msg="Migration successfully executed" id="create star table" duration=829.453µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.722514202Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.723479758Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=964.916µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.726005826Z level=info msg="Executing migration" id="create org table v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.727033964Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.028268ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.73023549Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.731757231Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.523521ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.734269919Z level=info msg="Executing migration" id="create org_user table v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.734976268Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=706.539µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.737054314Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.737719192Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=664.278µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.740423105Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.741082722Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=659.527µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.743434666Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.744072713Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=633.257µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.748209174Z level=info msg="Executing migration" id="Update org table charset"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.748278576Z level=info msg="Migration successfully executed" id="Update org table charset" duration=69.602µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.749862849Z level=info msg="Executing migration" id="Update org_user table charset"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.749927751Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=64.962µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.751563765Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.75174371Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=180.535µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.753667232Z level=info msg="Executing migration" id="create dashboard table"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.75434969Z level=info msg="Migration successfully executed" id="create dashboard table" duration=682.349µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.756496368Z level=info msg="Executing migration" id="add index dashboard.account_id"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.757251818Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=755.05µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.759156539Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.759856368Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=699.719µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.762183041Z level=info msg="Executing migration" id="create dashboard_tag table"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.762793677Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=612.216µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.764909524Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.765603123Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=693.349µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.767926846Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.76884028Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=913.224µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.770542576Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.77512492Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.578653ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.777194255Z level=info msg="Executing migration" id="create dashboard v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.777959616Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=765.301µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.780064203Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.781176043Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.11196ms
Dec  1 04:52:13 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.783631519Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.784395169Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=767.12µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.787492163Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.787876183Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=384.26µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.789501597Z level=info msg="Executing migration" id="drop table dashboard_v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.790913765Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.411538ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.792902709Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.793001641Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=99.283µs
Dec  1 04:52:13 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.795057297Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.796778583Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.720436ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.799343272Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.800863953Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.519871ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.802514978Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.803996307Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.48339ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.80632361Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.80706261Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=738.73µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.80967064Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.811172781Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.502601ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.813172975Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.814032988Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=859.413µs
Dec  1 04:52:13 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 668c2fa3-8c92-438a-a0da-986dbd0d5a14 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.816683519Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.817555913Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=876.444µs
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.819936577Z level=info msg="Executing migration" id="Update dashboard table charset"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.820006549Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=70.282µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.82154204Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.821607252Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=65.702µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.823434401Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.825033154Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.598903ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.826935565Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.828508178Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.572203ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.831524089Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.833209224Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.687505ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.835294951Z level=info msg="Executing migration" id="Add column uid in dashboard"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.836833242Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.537781ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.838518707Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.838738603Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=219.946µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.840793299Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.841488847Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=695.648µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.84379445Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.844572761Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=778.211µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.847202421Z level=info msg="Executing migration" id="Update dashboard title length"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.847265203Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=63.652µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.849219746Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.849905544Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=685.168µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.856221744Z level=info msg="Executing migration" id="create dashboard_provisioning"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.85716275Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=940.976µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.859866713Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.87498336Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=15.089796ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.877677582Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.878544266Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=871.724µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.881478945Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.882187814Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=709.459µs
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.886509359Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.88726286Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=752.911µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.890545288Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.891074862Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=531.914µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.894246638Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.895687277Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.446099ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.897907626Z level=info msg="Executing migration" id="Add check_sum column"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.900225169Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.315853ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.902454229Z level=info msg="Executing migration" id="Add index for dashboard_title"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.903447886Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.000197ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.905622674Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.905872771Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=251.477µs
Dec  1 04:52:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.908225834Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.908462811Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=236.567µs
Dec  1 04:52:13 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.owswdq on compute-0
Dec  1 04:52:13 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.owswdq on compute-0
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.911000619Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.912036367Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.037648ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.915972503Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.917904835Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.932842ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.919710514Z level=info msg="Executing migration" id="create data_source table"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.920719001Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.008537ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.923910507Z level=info msg="Executing migration" id="add index data_source.account_id"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.924843452Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=933.665µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.928375007Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.929871748Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.570773ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.932733255Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.933645579Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=907.384µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.935385646Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.936107656Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=718.339µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.93848491Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.943619558Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.129679ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.946275519Z level=info msg="Executing migration" id="create data_source table v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.947541014Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.265384ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.949784294Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.950682388Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=897.414µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.952449996Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.953179395Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=805.441µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.95706427Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.957687977Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=623.837µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.962681301Z level=info msg="Executing migration" id="Add column with_credentials"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.964699606Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.017505ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.967130051Z level=info msg="Executing migration" id="Add secure json data column"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.969481025Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.352774ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.971259163Z level=info msg="Executing migration" id="Update data_source table charset"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.971347995Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=89.492µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.975268541Z level=info msg="Executing migration" id="Update initial version to 1"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.975546038Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=278.037µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.977813439Z level=info msg="Executing migration" id="Add read_only data column"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.979883165Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.068856ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.98229578Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.982537286Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=241.846µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.984919861Z level=info msg="Executing migration" id="Update json_data with nulls"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.985124496Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=204.765µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.98788209Z level=info msg="Executing migration" id="Add uid column"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.990664815Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.788585ms
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.992633568Z level=info msg="Executing migration" id="Update uid value"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.992848394Z level=info msg="Migration successfully executed" id="Update uid value" duration=215.186µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.995480225Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.996431241Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=954.646µs
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.998464846Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Dec  1 04:52:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:13.999168714Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=704.179µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.001700473Z level=info msg="Executing migration" id="create api_key table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.002473053Z level=info msg="Migration successfully executed" id="create api_key table" duration=775.66µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.005034953Z level=info msg="Executing migration" id="add index api_key.account_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.005743952Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=708.44µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.008468105Z level=info msg="Executing migration" id="add index api_key.key"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.009519683Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.051128ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.01271957Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.01422287Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.507841ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.01644084Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.01717644Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=735.66µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.018964208Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.019727288Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=760.28µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.021842725Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.022838182Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=995.907µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.02535029Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.030158889Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.807949ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.031943007Z level=info msg="Executing migration" id="create api_key table v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.032704528Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=761.631µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.0346386Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.03536532Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=726.35µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.038399151Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.039483811Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.08775ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.041496995Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.042331027Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=834.432µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.045471102Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.045998206Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=527.804µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.047940918Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.048707399Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=767.391µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.050524118Z level=info msg="Executing migration" id="Update api_key table charset"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.050553249Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=30.601µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.052567353Z level=info msg="Executing migration" id="Add expires to api_key table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.054737192Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.170028ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.056624832Z level=info msg="Executing migration" id="Add service account foreign key"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.058495333Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.870241ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.060660751Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.060808815Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=148.404µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.062661355Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.06469802Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.036855ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.067775033Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.069918651Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.143157ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.072999083Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.073724153Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=725.22µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.075727967Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.07621552Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=487.553µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.078153672Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.078907453Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=753.711µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.081824131Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.082487259Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=665.308µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.08476503Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.085435819Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=670.098µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.087640548Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.088323956Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=685.198µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.091260255Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.091324957Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=65.422µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.093460285Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.093480885Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=22.38µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.09550882Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.097660888Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.151688ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.10626409Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.109003183Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.742083ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.111367687Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.111421809Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=54.632µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.113520215Z level=info msg="Executing migration" id="create quota table v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.114219374Z level=info msg="Migration successfully executed" id="create quota table v1" duration=700.019µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.116963448Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.117742199Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=779.321µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.120235136Z level=info msg="Executing migration" id="Update quota table charset"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.120276567Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=44.891µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.122057605Z level=info msg="Executing migration" id="create plugin_setting table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.12298526Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=927.125µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.126622558Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.127584624Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=962.406µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.13041519Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.133479933Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.063573ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.135657332Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.135686622Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=30.001µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.137538702Z level=info msg="Executing migration" id="create session table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.138613111Z level=info msg="Migration successfully executed" id="create session table" duration=1.073859ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.141230952Z level=info msg="Executing migration" id="Drop old table playlist table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.141352985Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=122.883µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.142971299Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.143062741Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=93.903µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.145132347Z level=info msg="Executing migration" id="create playlist table v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.146034811Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=902.154µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.148299122Z level=info msg="Executing migration" id="create playlist item table v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.149178446Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=880.384µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.15156356Z level=info msg="Executing migration" id="Update playlist table charset"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.151596411Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=34.931µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.153336138Z level=info msg="Executing migration" id="Update playlist_item table charset"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.153355638Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=21.19µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.15491589Z level=info msg="Executing migration" id="Add playlist column created_at"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.157327775Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.413175ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.159409501Z level=info msg="Executing migration" id="Add playlist column updated_at"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.162466104Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.054643ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.164497549Z level=info msg="Executing migration" id="drop preferences table v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.164599451Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=102.713µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.166685657Z level=info msg="Executing migration" id="drop preferences table v3"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.16676663Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=80.983µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.168604159Z level=info msg="Executing migration" id="create preferences table v3"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.16937712Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=771.741µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.171892518Z level=info msg="Executing migration" id="Update preferences table charset"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.171926829Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=33.981µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.173647795Z level=info msg="Executing migration" id="Add column team_id in preferences"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.176025739Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.374864ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.178244439Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.178406863Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=162.874µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.180164421Z level=info msg="Executing migration" id="Add column week_start in preferences"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.182568535Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.403984ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.184407505Z level=info msg="Executing migration" id="Add column preferences.json_data"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.186629025Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.22135ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.188220118Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.188280469Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=60.831µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.190515969Z level=info msg="Executing migration" id="Add preferences index org_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.191240169Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=724.74µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.193632203Z level=info msg="Executing migration" id="Add preferences index user_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.194335632Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=667.808µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.196365727Z level=info msg="Executing migration" id="create alert table v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.19722097Z level=info msg="Migration successfully executed" id="create alert table v1" duration=854.993µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.19945104Z level=info msg="Executing migration" id="add index alert org_id & id "
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.200147839Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=696.539µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.203777237Z level=info msg="Executing migration" id="add index alert state"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.204435624Z level=info msg="Migration successfully executed" id="add index alert state" duration=658.687µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.207360553Z level=info msg="Executing migration" id="add index alert dashboard_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.208119394Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=758.071µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.210930499Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.211633308Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=702.699µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.213982642Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.214743422Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=761.25µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.217074485Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.217803235Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=729.62µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.219648834Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.227210198Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.556524ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.229407157Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.229990883Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=583.746µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:14 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.232973973Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.233702003Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=727.98µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.236213511Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.23655846Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=344.489µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.238587695Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.239220112Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=632.578µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.24139545Z level=info msg="Executing migration" id="create alert_notification table v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.24212065Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=724.76µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.244031001Z level=info msg="Executing migration" id="Add column is_default"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.246741344Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.711913ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.248859301Z level=info msg="Executing migration" id="Add column frequency"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.251483642Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.624021ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.253575148Z level=info msg="Executing migration" id="Add column send_reminder"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.256359423Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.783955ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.257877894Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.260363681Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.485357ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.262207621Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.262842518Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=634.647µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.2651427Z level=info msg="Executing migration" id="Update alert table charset"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.26516076Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=18.27µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.267163214Z level=info msg="Executing migration" id="Update alert_notification table charset"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.267181885Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=17.191µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.26922179Z level=info msg="Executing migration" id="create notification_journal table v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.269808486Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=587.966µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.272109558Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.272791536Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=681.708µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.275069297Z level=info msg="Executing migration" id="drop alert_notification_journal"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.275812277Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=742.38µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.277810971Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.278447378Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=636.157µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.280442542Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.28110344Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=660.668µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.283001391Z level=info msg="Executing migration" id="Add for to alert table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.285759005Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.757184ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.287443541Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.290174204Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.731943ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.291944422Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.292078486Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=134.234µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.293778231Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.29445712Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=678.989µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.296808373Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.297485191Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=676.548µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.299486755Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.302145617Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.658522ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.304398158Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.304443639Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=46.692µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.306140454Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.306796622Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=655.948µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.30858258Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.309296819Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=713.809µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.312130126Z level=info msg="Executing migration" id="Drop old annotation table v4"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.312218338Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=88.792µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.314492729Z level=info msg="Executing migration" id="create annotation table v5"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.31523902Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=747.211µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.318116967Z level=info msg="Executing migration" id="add index annotation 0 v3"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.318942899Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=826.152µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.321526769Z level=info msg="Executing migration" id="add index annotation 1 v3"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.32231791Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=769.431µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.324863319Z level=info msg="Executing migration" id="add index annotation 2 v3"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.325781174Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=914.864µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.329102973Z level=info msg="Executing migration" id="add index annotation 3 v3"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.33008621Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=980.956µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.333590114Z level=info msg="Executing migration" id="add index annotation 4 v3"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.334579091Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=989.677µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.33715099Z level=info msg="Executing migration" id="Update annotation table charset"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.337175871Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=25.891µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.33939973Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.342610057Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.210557ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.345115574Z level=info msg="Executing migration" id="Drop category_id index"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.345918596Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=802.892µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.348033943Z level=info msg="Executing migration" id="Add column tags to annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.351427515Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.386881ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.353287825Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.353979553Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=693.458µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.355874304Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.356655825Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=781.471µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.359167443Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.359872842Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=705.459µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.361962188Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.371283299Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=9.317011ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.373575871Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.37427493Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=700.199µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.376466099Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.377393764Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=927.965µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.380485287Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.380750795Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=265.987µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.382648146Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.383161449Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=516.123µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.384644269Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.384785993Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=141.904µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.386724275Z level=info msg="Executing migration" id="Add created time to annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.389799378Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.073633ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.391832963Z level=info msg="Executing migration" id="Add updated time to annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.394786473Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.95821ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.396718145Z level=info msg="Executing migration" id="Add index for created in annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.397472985Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=754.17µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.400143577Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.40101636Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=873.123µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.403519128Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.403732444Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=213.646µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.405701977Z level=info msg="Executing migration" id="Add epoch_end column"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.410136236Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.431679ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.412345176Z level=info msg="Executing migration" id="Add index for epoch_end"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.413417615Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.073869ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.415737367Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.415904802Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=167.555µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.418473891Z level=info msg="Executing migration" id="Move region to single row"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.41880053Z level=info msg="Migration successfully executed" id="Move region to single row" duration=326.879µs
Dec  1 04:52:14 np0005540825 podman[98299]: 2025-12-01 09:52:14.419069497 +0000 UTC m=+0.044214282 container create 376d401d1f98cc98b78d1ffa49d42f54490ee00d840a98f7c137a8a1111ee397 (image=quay.io/ceph/haproxy:2.3, name=infallible_herschel)
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.420822554Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.42176694Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=944.485µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.423700372Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.424646117Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=941.855µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.427036361Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.428059709Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.023378ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.430620928Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.431546213Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=924.965µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.43440725Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.435376086Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=968.956µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.437687498Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.438769788Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.08351ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.440950966Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.441017138Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=66.642µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.443928077Z level=info msg="Executing migration" id="create test_data table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.444900953Z level=info msg="Migration successfully executed" id="create test_data table" duration=972.697µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.447182104Z level=info msg="Executing migration" id="create dashboard_version table v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.448092759Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=910.985µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.451024678Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.452002654Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=979.736µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.45444271Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.455293523Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=851.793µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.457740999Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.457911373Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=174.954µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.459784784Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.46075204Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=972.197µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.463054272Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.463132504Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=78.842µs
Dec  1 04:52:14 np0005540825 systemd[1]: Started libpod-conmon-376d401d1f98cc98b78d1ffa49d42f54490ee00d840a98f7c137a8a1111ee397.scope.
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.465348614Z level=info msg="Executing migration" id="create team table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.466459094Z level=info msg="Migration successfully executed" id="create team table" duration=1.109439ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.469716511Z level=info msg="Executing migration" id="add index team.org_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.471298464Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.587603ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.474356936Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.475063685Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=706.499µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.47781731Z level=info msg="Executing migration" id="Add column uid in team"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.482040833Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.222494ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.484195131Z level=info msg="Executing migration" id="Update uid column values in team"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.484417447Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=223.466µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.486046721Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.486842023Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=795.132µs
Dec  1 04:52:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec  1 04:52:14 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.490846071Z level=info msg="Executing migration" id="create team member table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.49155895Z level=info msg="Migration successfully executed" id="create team member table" duration=712.13µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.494953711Z level=info msg="Executing migration" id="add index team_member.org_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.495717752Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=763.281µs
Dec  1 04:52:14 np0005540825 podman[98299]: 2025-12-01 09:52:14.401560935 +0000 UTC m=+0.026705740 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.498166098Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.498904658Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=738.8µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.501487987Z level=info msg="Executing migration" id="add index team_member.team_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.502162015Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=674.098µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.504958301Z level=info msg="Executing migration" id="Add column email to team table"
Dec  1 04:52:14 np0005540825 podman[98299]: 2025-12-01 09:52:14.509580765 +0000 UTC m=+0.134725580 container init 376d401d1f98cc98b78d1ffa49d42f54490ee00d840a98f7c137a8a1111ee397 (image=quay.io/ceph/haproxy:2.3, name=infallible_herschel)
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.509521944Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.566573ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.511901188Z level=info msg="Executing migration" id="Add column external to team_member table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.515601657Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.699599ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.517695524Z level=info msg="Executing migration" id="Add column permission to team_member table"
Dec  1 04:52:14 np0005540825 podman[98299]: 2025-12-01 09:52:14.518326461 +0000 UTC m=+0.143471256 container start 376d401d1f98cc98b78d1ffa49d42f54490ee00d840a98f7c137a8a1111ee397 (image=quay.io/ceph/haproxy:2.3, name=infallible_herschel)
Dec  1 04:52:14 np0005540825 podman[98299]: 2025-12-01 09:52:14.521379743 +0000 UTC m=+0.146524528 container attach 376d401d1f98cc98b78d1ffa49d42f54490ee00d840a98f7c137a8a1111ee397 (image=quay.io/ceph/haproxy:2.3, name=infallible_herschel)
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.521420924Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.72093ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.523560262Z level=info msg="Executing migration" id="create dashboard acl table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.524416755Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=857.913µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.526626004Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Dec  1 04:52:14 np0005540825 infallible_herschel[98315]: 0 0
Dec  1 04:52:14 np0005540825 systemd[1]: libpod-376d401d1f98cc98b78d1ffa49d42f54490ee00d840a98f7c137a8a1111ee397.scope: Deactivated successfully.
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.527402325Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=775.181µs
Dec  1 04:52:14 np0005540825 podman[98299]: 2025-12-01 09:52:14.527795506 +0000 UTC m=+0.152940291 container died 376d401d1f98cc98b78d1ffa49d42f54490ee00d840a98f7c137a8a1111ee397 (image=quay.io/ceph/haproxy:2.3, name=infallible_herschel)
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.529957914Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.53091698Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=959.016µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.533219972Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.534029014Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=809.072µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.536220233Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.536928052Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=706.699µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.539244524Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.539962654Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=718.83µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.542676737Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.543505679Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=829.502µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.549004287Z level=info msg="Executing migration" id="add index dashboard_permission"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.550190899Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.189482ms
Dec  1 04:52:14 np0005540825 systemd[1]: var-lib-containers-storage-overlay-88b933b4f4e2f45de7c1867d59eebbef57cc0405a14db80e6b8a63c41aa01dac-merged.mount: Deactivated successfully.
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.556351155Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.557071455Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=724.26µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.561978047Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.562262305Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=285.087µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.566108128Z level=info msg="Executing migration" id="create tag table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.567054324Z level=info msg="Migration successfully executed" id="create tag table" duration=944.206µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.569477349Z level=info msg="Executing migration" id="add index tag.key_value"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.570886287Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=928.625µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.575385358Z level=info msg="Executing migration" id="create login attempt table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.576390595Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.006577ms
Dec  1 04:52:14 np0005540825 podman[98299]: 2025-12-01 09:52:14.578162063 +0000 UTC m=+0.203306848 container remove 376d401d1f98cc98b78d1ffa49d42f54490ee00d840a98f7c137a8a1111ee397 (image=quay.io/ceph/haproxy:2.3, name=infallible_herschel)
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.579560961Z level=info msg="Executing migration" id="add index login_attempt.username"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.581186014Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.628494ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.58361465Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.584631067Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.016777ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.586900638Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.599038895Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=12.132507ms
Dec  1 04:52:14 np0005540825 systemd[1]: libpod-conmon-376d401d1f98cc98b78d1ffa49d42f54490ee00d840a98f7c137a8a1111ee397.scope: Deactivated successfully.
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.601851001Z level=info msg="Executing migration" id="create login_attempt v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.602737545Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=888.844µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.60516466Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.605961992Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=795.312µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.608786168Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.609102306Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=315.988µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.610996167Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.611664105Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=667.508µs
Dec  1 04:52:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  1 04:52:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.614879702Z level=info msg="Executing migration" id="create user auth table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.616065454Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.186632ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.618423298Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Dec  1 04:52:14 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.62036121Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.937713ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.623658139Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.624049849Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=393.001µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.626389002Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Dec  1 04:52:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 86 pg[10.8( v 56'1015 (0'0,56'1015] local-lis/les=85/86 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=85) [0]/[1] async=[0] r=0 lpr=85 pi=[61,85)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:14 np0005540825 systemd[1]: Reloading.
Dec  1 04:52:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 86 pg[10.18( v 56'1015 (0'0,56'1015] local-lis/les=85/86 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=85) [0]/[1] async=[0] r=0 lpr=85 pi=[61,85)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.631436678Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.044686ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.633675638Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.638798396Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.121258ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.640493392Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.645684982Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.18991ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.648185229Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.653100002Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.910713ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.655529867Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.65673486Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.206323ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.659414792Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.663457771Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=4.041459ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.66567093Z level=info msg="Executing migration" id="create server_lock table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.666606336Z level=info msg="Migration successfully executed" id="create server_lock table" duration=936.166µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.66936169Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.670389148Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.028497ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.672545176Z level=info msg="Executing migration" id="create user auth token table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.673469681Z level=info msg="Migration successfully executed" id="create user auth token table" duration=924.454µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.675886056Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.677018716Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.13186ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.679650607Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.681759654Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=2.108517ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.685000851Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.686334827Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.335536ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.688706821Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.694831146Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.118385ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.697344264Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.698823974Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.519941ms
Dec  1 04:52:14 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.702103122Z level=info msg="Executing migration" id="create cache_data table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.703286764Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.185902ms
Dec  1 04:52:14 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.705937745Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.707057885Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.12031ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.711987948Z level=info msg="Executing migration" id="create short_url table v1"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.71315408Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.166192ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.715786061Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.716666014Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=879.164µs
Dec  1 04:52:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 86 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=86) [1] r=0 lpr=86 pi=[69,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.719715056Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.719872911Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=161.985µs
Dec  1 04:52:14 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 86 pg[10.9( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=86) [1] r=0 lpr=86 pi=[69,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.722756088Z level=info msg="Executing migration" id="delete alert_definition table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.722895102Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=139.864µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.725472852Z level=info msg="Executing migration" id="recreate alert_definition table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.726319714Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=846.133µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.728816692Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.729797318Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=981.576µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.732621074Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.733797816Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.178502ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.737453534Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.737516656Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=63.462µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.739102919Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.740154677Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.051388ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.742440999Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.743298572Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=858.333µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.745004148Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.745763858Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=759.78µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.7476986Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.748489192Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=789.642µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.750571018Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.754904834Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.332776ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.757147935Z level=info msg="Executing migration" id="drop alert_definition table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.758362158Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.213483ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.760229318Z level=info msg="Executing migration" id="delete alert_definition_version table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.760335051Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=106.123µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.762733925Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.763614579Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=880.334µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.765845309Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.76660896Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=768.391µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.768532042Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.769260111Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=727.929µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.770803943Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.770899845Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=95.702µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.772743405Z level=info msg="Executing migration" id="drop alert_definition_version table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.773750352Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.006727ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.775803277Z level=info msg="Executing migration" id="create alert_instance table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.776752953Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=949.336µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.778630244Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.779582389Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=951.165µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.781706226Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.782626191Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=919.405µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.784739748Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.79038319Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.642692ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.792181619Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.793085153Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=903.494µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.795284372Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.796194577Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=910.565µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.798288863Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.821343204Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=23.050371ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.823684187Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Dec  1 04:52:14 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec  1 04:52:14 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.843251145Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=19.564937ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.84494209Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.84568963Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=747.29µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.84754999Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.848251079Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=702.349µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.850428178Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.854255521Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=3.826203ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.855859634Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.859673877Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=3.813833ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.862640407Z level=info msg="Executing migration" id="create alert_rule table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.863421368Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=781.511µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.867504178Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.868258918Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=752.8µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.871079834Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.871893296Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=813.492µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.874736043Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.875799001Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.062728ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.881547126Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.881598368Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=52.102µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.884507886Z level=info msg="Executing migration" id="add column for to alert_rule"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.889397668Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.888982ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.891204666Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.895260756Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.05603ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.897088485Z level=info msg="Executing migration" id="add column labels to alert_rule"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.901414971Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.327356ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.903690843Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.904679629Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=988.896µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.906582101Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.907398193Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=816.602µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.909492669Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.915072299Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.57676ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.917374181Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.923552868Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.179467ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.927007821Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.929395555Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.384604ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.932519019Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Dec  1 04:52:14 np0005540825 systemd[1]: Reloading.
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.93700968Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.483431ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.939107357Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.94441673Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.308543ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.947597556Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.947726429Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=133.373µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.95000294Z level=info msg="Executing migration" id="create alert_rule_version table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.951433129Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.435939ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.955816217Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.956851115Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.035938ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.959239729Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.960120803Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=881.074µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.962647161Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.962693432Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=46.661µs
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.964500041Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.968732905Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.232184ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.970612476Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.974776008Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.162792ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.976191776Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.98042166Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.227494ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.982352042Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.986536235Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.185103ms
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:14 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.993763949Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Dec  1 04:52:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:14.999482333Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.717714ms
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.00232694Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.002392042Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=66.732µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.004211491Z level=info msg="Executing migration" id=create_alert_configuration_table
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.004898389Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=687.338µs
Dec  1 04:52:15 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.007726716Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Dec  1 04:52:15 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.012316459Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.571993ms
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.01457229Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.014623561Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=52.431µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.016585334Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.020985793Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.397039ms
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.023292445Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.024371184Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.078669ms
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.026879252Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.031226219Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.346107ms
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.033404557Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.034196479Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=791.652µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.037009434Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.037793876Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=781.812µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.040375385Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.045201625Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.82462ms
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.047270801Z level=info msg="Executing migration" id="create provenance_type table"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.047934899Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=664.088µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.050649252Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.051648199Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=999.337µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.054456585Z level=info msg="Executing migration" id="create alert_image table"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.055174814Z level=info msg="Migration successfully executed" id="create alert_image table" duration=718.6µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.057540338Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.058352619Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=812.721µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.060427735Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.060474537Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=47.562µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.062553443Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.063326603Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=772.74µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.065654006Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.066445408Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=791.241µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.068515623Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.068791921Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.070429975Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.070797675Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=367.41µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.072335246Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.073067196Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=732.54µs
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.076277592Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Dec  1 04:52:15 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:15 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:15 np0005540825 ceph-mon[74416]: Deploying daemon haproxy.rgw.default.compute-0.owswdq on compute-0
Dec  1 04:52:15 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.080934048Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=4.654806ms
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.100392852Z level=info msg="Executing migration" id="create library_element table v1"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.10143585Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.043788ms
Dec  1 04:52:15 np0005540825 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.owswdq for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:52:15 np0005540825 systemd-logind[789]: New session 37 of user zuul.
Dec  1 04:52:15 np0005540825 systemd[1]: Started Session 37 of User zuul.
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.364671982Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.367261472Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=2.588009ms
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.415325056Z level=info msg="Executing migration" id="create library_element_connection table v1"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.416708164Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.415779ms
Dec  1 04:52:15 np0005540825 podman[98516]: 2025-12-01 09:52:15.462643521 +0000 UTC m=+0.026121535 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.578369499Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.580842836Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=2.475966ms
Dec  1 04:52:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec  1 04:52:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  1 04:52:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:15 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.805054326Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Dec  1 04:52:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:15.806436923Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.384557ms
Dec  1 04:52:15 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec  1 04:52:15 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec  1 04:52:16 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 27 completed events
Dec  1 04:52:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:52:16 np0005540825 podman[98516]: 2025-12-01 09:52:16.169588087 +0000 UTC m=+0.733066081 container create 0059e7ccf7457fec64736cc54703d3f986dd002d380ad03e1091edc4f6004f36 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-rgw-default-compute-0-owswdq)
Dec  1 04:52:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:16 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:16.609962431Z level=info msg="Executing migration" id="increase max description length to 2048"
Dec  1 04:52:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:16.610010062Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=54.242µs
Dec  1 04:52:16 np0005540825 python3.9[98626]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:52:16 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec  1 04:52:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:16 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  1 04:52:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec  1 04:52:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 2 unknown, 2 active+remapped, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:17 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.059149551Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.059274165Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=128.194µs
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Dec  1 04:52:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84f9eb9776f04bd2d86ce6e7dd0d35f780af8d5540b1452d932ca713a933c43/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.161091928Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.16152598Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=436.721µs
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.227663251Z level=info msg="Executing migration" id="create data_keys table"
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 87 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 87 pg[10.9( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 87 pg[10.9( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 87 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.229080229Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.419698ms
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 87 pg[10.8( v 56'1015 (0'0,56'1015] local-lis/les=85/86 n=7 ec=61/50 lis/c=85/61 les/c/f=86/62/0 sis=87 pruub=12.399388313s) [0] async=[0] r=-1 lpr=87 pi=[61,87)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 243.069503784s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 87 pg[10.8( v 56'1015 (0'0,56'1015] local-lis/les=85/86 n=7 ec=61/50 lis/c=85/61 les/c/f=86/62/0 sis=87 pruub=12.399317741s) [0] r=-1 lpr=87 pi=[61,87)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 243.069503784s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 87 pg[10.18( v 56'1015 (0'0,56'1015] local-lis/les=85/86 n=4 ec=61/50 lis/c=85/61 les/c/f=86/62/0 sis=87 pruub=12.400777817s) [0] async=[0] r=-1 lpr=87 pi=[61,87)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 243.071411133s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 87 pg[10.18( v 56'1015 (0'0,56'1015] local-lis/les=85/86 n=4 ec=61/50 lis/c=85/61 les/c/f=86/62/0 sis=87 pruub=12.400716782s) [0] r=-1 lpr=87 pi=[61,87)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 243.071411133s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:18 np0005540825 podman[98516]: 2025-12-01 09:52:18.230716884 +0000 UTC m=+2.794194898 container init 0059e7ccf7457fec64736cc54703d3f986dd002d380ad03e1091edc4f6004f36 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-rgw-default-compute-0-owswdq)
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:18 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 87 pg[10.a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=87) [1] r=0 lpr=87 pi=[71,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:18 np0005540825 podman[98516]: 2025-12-01 09:52:18.236669834 +0000 UTC m=+2.800147818 container start 0059e7ccf7457fec64736cc54703d3f986dd002d380ad03e1091edc4f6004f36 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-rgw-default-compute-0-owswdq)
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 87 pg[10.1a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=87) [1] r=0 lpr=87 pi=[71,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-rgw-default-compute-0-owswdq[98798]: [NOTICE] 334/095218 (2) : New worker #1 (4) forked
Dec  1 04:52:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  1 04:52:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:18.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.272748606Z level=info msg="Executing migration" id="create secrets table"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.273747463Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.000887ms
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  1 04:52:18 np0005540825 bash[98516]: 0059e7ccf7457fec64736cc54703d3f986dd002d380ad03e1091edc4f6004f36
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.277436472Z level=info msg="Executing migration" id="rename data_keys name column to id"
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:18 np0005540825 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.owswdq for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 88 pg[10.a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[71,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 88 pg[10.a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[71,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 88 pg[10.1a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[71,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 88 pg[10.1a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=71/71 les/c/f=72/72/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[71,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.309590358Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=32.148656ms
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.39466375Z level=info msg="Executing migration" id="add name column into data_keys"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.401648278Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.987568ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.404401353Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.404577057Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=176.834µs
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.408320128Z level=info msg="Executing migration" id="rename data_keys name column to label"
Dec  1 04:52:18 np0005540825 python3.9[98849]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.438028449Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=29.7023ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.445736166Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.475349544Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.596507ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.523348617Z level=info msg="Executing migration" id="create kv_store table v1"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.524666413Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.320206ms
Dec  1 04:52:18 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:18 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.zubkfi on compute-2
Dec  1 04:52:18 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.zubkfi on compute-2
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.530436438Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.532216396Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.779378ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.535147915Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.535474784Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=326.839µs
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.537752025Z level=info msg="Executing migration" id="create permission table"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.539189314Z level=info msg="Migration successfully executed" id="create permission table" duration=1.436849ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.555908214Z level=info msg="Executing migration" id="add unique index permission.role_id"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.556902641Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=995.167µs
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.561647729Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.562966844Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.318855ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.592806548Z level=info msg="Executing migration" id="create role table"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.594333499Z level=info msg="Migration successfully executed" id="create role table" duration=1.529261ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.646462364Z level=info msg="Executing migration" id="add column display_name"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.652483156Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.025262ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.654351426Z level=info msg="Executing migration" id="add column group_name"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.660233695Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.878219ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.662232899Z level=info msg="Executing migration" id="add index role.org_id"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.663261256Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.029077ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.760385243Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.762934852Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=2.551919ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.767392522Z level=info msg="Executing migration" id="add index role_org_id_uid"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.768918763Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.528691ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.805520969Z level=info msg="Executing migration" id="create team role table"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.806932927Z level=info msg="Migration successfully executed" id="create team role table" duration=1.415988ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.848660701Z level=info msg="Executing migration" id="add index team_role.org_id"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.850585423Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.927932ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.853922413Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.855442994Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.520361ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.874293312Z level=info msg="Executing migration" id="add index team_role.team_id"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.875892565Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.602533ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.918233206Z level=info msg="Executing migration" id="create user role table"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.919847179Z level=info msg="Migration successfully executed" id="create user role table" duration=1.616304ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.923150918Z level=info msg="Executing migration" id="add index user_role.org_id"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.925370918Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=2.22464ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.967841642Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.969649001Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.812829ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.974472771Z level=info msg="Executing migration" id="add index user_role.user_id"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.97555628Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.083259ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.981225213Z level=info msg="Executing migration" id="create builtin role table"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.982940139Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.720957ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.985390675Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.986885165Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.4937ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.98966189Z level=info msg="Executing migration" id="add index builtin_role.name"
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:18.991060978Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.400657ms
Dec  1 04:52:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:18 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.00265924Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.011632662Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.975742ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.013732328Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.014945441Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.214493ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.042133303Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.043920192Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.791629ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.068050712Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.069697656Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.652945ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.094116564Z level=info msg="Executing migration" id="add unique index role.uid"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.09582925Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.724546ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.097904826Z level=info msg="Executing migration" id="create seed assignment table"
Dec  1 04:52:19 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1a deep-scrub starts
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.099046167Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.140701ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.103277181Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.10473346Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.455949ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.107127084Z level=info msg="Executing migration" id="add column hidden to role table"
Dec  1 04:52:19 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.1a deep-scrub ok
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.113553718Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.424163ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.116132607Z level=info msg="Executing migration" id="permission kind migration"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.126713762Z level=info msg="Migration successfully executed" id="permission kind migration" duration=10.578805ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.128601383Z level=info msg="Executing migration" id="permission attribute migration"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.134386449Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.784726ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.136202188Z level=info msg="Executing migration" id="permission identifier migration"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.141639974Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.436646ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.143502624Z level=info msg="Executing migration" id="add permission identifier index"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.14446897Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=966.136µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.152007744Z level=info msg="Executing migration" id="add permission action scope role_id index"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.153441532Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.430219ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.156356001Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.157465291Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.10905ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.159486085Z level=info msg="Executing migration" id="create query_history table v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.160218055Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=731.69µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.162226599Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.163260217Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.032088ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.167061089Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.167172942Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=112.793µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.169358621Z level=info msg="Executing migration" id="rbac disabled migrator"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.169395772Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=37.641µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.294865482Z level=info msg="Executing migration" id="teams permissions migration"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.295490499Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=628.817µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.334651294Z level=info msg="Executing migration" id="dashboard permissions"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.336040131Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.392497ms
Dec  1 04:52:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.341478648Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.342131046Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=652.137µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.345095195Z level=info msg="Executing migration" id="drop managed folder create actions"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.345285581Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=192.896µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.353519632Z level=info msg="Executing migration" id="alerting notification permissions"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.354138349Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=620.857µs
Dec  1 04:52:19 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  1 04:52:19 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:19 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:19 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:19 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:19 np0005540825 ceph-mon[74416]: Deploying daemon haproxy.rgw.default.compute-2.zubkfi on compute-2
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.363477491Z level=info msg="Executing migration" id="create query_history_star table v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.364873198Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.397158ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.377227731Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.379083981Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.85828ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.40095767Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.409017608Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.058787ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.411623288Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.41172443Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=102.962µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.414193617Z level=info msg="Executing migration" id="create correlation table v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.4161455Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.951122ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.419224023Z level=info msg="Executing migration" id="add index correlations.uid"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.421158275Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.934873ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.423847677Z level=info msg="Executing migration" id="add index correlations.source_uid"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.425696667Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.84871ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.428605635Z level=info msg="Executing migration" id="add correlation config column"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.442547521Z level=info msg="Migration successfully executed" id="add correlation config column" duration=13.937496ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.486599318Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Dec  1 04:52:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.489434694Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.836687ms
Dec  1 04:52:19 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.507687656Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.509929976Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.24404ms
Dec  1 04:52:19 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 89 pg[10.9( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=6 ec=61/50 lis/c=87/69 les/c/f=88/70/0 sis=89) [1] r=0 lpr=89 pi=[69,89)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:19 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 89 pg[10.9( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=6 ec=61/50 lis/c=87/69 les/c/f=88/70/0 sis=89) [1] r=0 lpr=89 pi=[69,89)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:19 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 89 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=7 ec=61/50 lis/c=87/69 les/c/f=88/70/0 sis=89) [1] r=0 lpr=89 pi=[69,89)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:19 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 89 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=7 ec=61/50 lis/c=87/69 les/c/f=88/70/0 sis=89) [1] r=0 lpr=89 pi=[69,89)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.515454335Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.537077038Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=21.622772ms
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:52:19
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [balancer INFO root] Some PGs (0.005666) are unknown; try again later
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.556531032Z level=info msg="Executing migration" id="create correlation v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.557699083Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.168881ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.559930603Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.560738165Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=805.642µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.564420104Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.565254977Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=835.003µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.570526389Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.571670349Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.14577ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.574575878Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.574803794Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=226.706µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.57689162Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.577668941Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=777.511µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.588483982Z level=info msg="Executing migration" id="add provisioning column"
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 2 unknown, 2 active+remapped, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.596423846Z level=info msg="Migration successfully executed" id="add provisioning column" duration=7.936614ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.603862787Z level=info msg="Executing migration" id="create entity_events table"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.604915155Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.052828ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.607465644Z level=info msg="Executing migration" id="create dashboard public config v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.608536223Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.072489ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.623018893Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.623519146Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.626360933Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.626853126Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.628966923Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.62998369Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.014547ms
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:19 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
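This ganesha/ntirpc EVENT (repeated below on other svc threads) appears to come from reading an HAProxy PROXY-protocol header on the NFS socket and hitting a short read, after which the transport is marked dead; the bare "%" is a broken format specifier in the daemon's own message, so the actual length is not recoverable from the log. Given that cephadm is deploying an ingress (haproxy/keepalived) for this cluster, these look like health probes that connect and close early. For reference, a minimal sketch of the PROXY v2 header layout such a read expects (spec-level, not ganesha's code):

    import struct

    # HAProxy PROXY protocol v2 header: a fixed 12-byte signature, one
    # version/command byte, one address-family/transport byte, then a
    # big-endian 16-bit length for the address block that follows.
    PP2_SIG = b"\r\n\r\n\x00\r\nQUIT\n"

    def parse_proxy_v2(buf: bytes):
        if len(buf) < 16 or not buf.startswith(PP2_SIG):
            raise ValueError("not a PROXY v2 header")
        ver_cmd, family, addr_len = struct.unpack(">BBH", buf[12:16])
        if len(buf) < 16 + addr_len:
            # A short read here is the kind of failure svc_vc_recv reports
            # before it marks the transport dead.
            raise ValueError("short read: address block truncated")
        return ver_cmd, family, buf[16:16 + addr_len]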
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.654895332Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.656013592Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.121281ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.67154659Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.672735402Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.191882ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.698363983Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.699487373Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.12522ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.703180352Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.705201997Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=2.027615ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.708008823Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.708967088Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=958.056µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.750091986Z level=info msg="Executing migration" id="Drop public config table"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.751835083Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.746237ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.769498729Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.7710233Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.526341ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.776869408Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.778129812Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.257963ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.780607868Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.781943204Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.335856ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.78400726Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.785583392Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.573192ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.78845623Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.818701855Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=30.237154ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.820862333Z level=info msg="Executing migration" id="add annotations_enabled column"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.829990829Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=9.127726ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.832140567Z level=info msg="Executing migration" id="add time_selection_enabled column"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.840458371Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.312224ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.843149713Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.84340556Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=255.997µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.845276921Z level=info msg="Executing migration" id="add share column"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.853701798Z level=info msg="Migration successfully executed" id="add share column" duration=8.424126ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.855747483Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.855965839Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=218.356µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.857695995Z level=info msg="Executing migration" id="create file table"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.858726123Z level=info msg="Migration successfully executed" id="create file table" duration=1.030148ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.862187146Z level=info msg="Executing migration" id="file table idx: path natural pk"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.863542073Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.354897ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.866281616Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.867595332Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.313646ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.872691959Z level=info msg="Executing migration" id="create file_meta table"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.874250741Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.558992ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.877609682Z level=info msg="Executing migration" id="file table idx: path key"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.878830955Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.221213ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.88385797Z level=info msg="Executing migration" id="set path collation in file table"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.883921212Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=64.212µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.885940216Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.886032769Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=100.053µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.888975148Z level=info msg="Executing migration" id="managed permissions migration"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.889615155Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=639.577µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.892290667Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.892668707Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=378.86µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.895333029Z level=info msg="Executing migration" id="RBAC action name migrator"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.896601303Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.266934ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.898559066Z level=info msg="Executing migration" id="Add UID column to playlist"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.90539019Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.830264ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.907488827Z level=info msg="Executing migration" id="Update uid column values in playlist"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.907636841Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=148.834µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.910118707Z level=info msg="Executing migration" id="Add index for uid in playlist"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.911082853Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=979.906µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.913813477Z level=info msg="Executing migration" id="update group index for alert rules"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.914185057Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=373.48µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.915976125Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.91615754Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=182.395µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.917980869Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.918444412Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=463.803µs
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.92061909Z level=info msg="Executing migration" id="add action column to seed_assignment"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.928335318Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=7.713018ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.930743463Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.93844388Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.697667ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.940412344Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.941557784Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.145881ms
Dec  1 04:52:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:19.943447065Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:52:19 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:52:20 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:52:20 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:52:20 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:52:20 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:52:20 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.029250637Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=85.796782ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.033356017Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.034467037Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.1116ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.036496222Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.037361885Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=865.273µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.039472972Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.061786133Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=22.308871ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.065178335Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.073035536Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.852541ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.075707568Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.076062038Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=356.79µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.077944879Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.078150234Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=206.355µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.08059898Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.080823766Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=225.626µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.083108758Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.083337174Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=228.286µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.085180554Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.08542084Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=240.696µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.087805724Z level=info msg="Executing migration" id="create folder table"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.088734769Z level=info msg="Migration successfully executed" id="create folder table" duration=929.135µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.090639381Z level=info msg="Executing migration" id="Add index for parent_uid"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.093022475Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.379774ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.096025036Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.097050263Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.027897ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.099697665Z level=info msg="Executing migration" id="Update folder title length"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.099724845Z level=info msg="Migration successfully executed" id="Update folder title length" duration=27.95µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.101690498Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.102605633Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=914.985µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.106000955Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.107021742Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.020387ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.10955401Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.110553987Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.001787ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.112926611Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.113365403Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=438.872µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.115605313Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.115906391Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=304.128µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.118850321Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.119816387Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=965.916µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.121556034Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.122580921Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.025697ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.124536124Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.125665564Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.12933ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.128027118Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.129264641Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.251814ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.131764599Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.132757415Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=993.976µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.134532033Z level=info msg="Executing migration" id="create anon_device table"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.135471709Z level=info msg="Migration successfully executed" id="create anon_device table" duration=938.885µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.13774072Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.139273031Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.532491ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.142250291Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.143405322Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.155891ms
Dec  1 04:52:20 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.147071851Z level=info msg="Executing migration" id="create signing_key table"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.147997526Z level=info msg="Migration successfully executed" id="create signing_key table" duration=925.745µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.150986766Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.151942522Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=957.206µs
Dec  1 04:52:20 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.154668626Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.155671373Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.002577ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.157946374Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.158209491Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=264.387µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.160638657Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.168379735Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.733289ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.170829091Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.171624822Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=796.841µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.173253956Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.174452999Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.198363ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.177155632Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.178490417Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.334616ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.180930143Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.182712681Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.782538ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.185152327Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.186477763Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.325806ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.188762364Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.189855684Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.09327ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.191912409Z level=info msg="Executing migration" id="create sso_setting table"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.192910636Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=997.887µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.195597898Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.196358629Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=761.621µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.198560898Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.198797645Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=237.097µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.201031665Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.201083356Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=51.871µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.202937026Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.209685938Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.746292ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.212144534Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.218637219Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.492485ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.220686114Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.220991962Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=306.578µs
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=migrator t=2025-12-01T09:52:20.223659964Z level=info msg="migrations completed" performed=547 skipped=0 duration=6.641637007s
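Each migrator step pairs an "Executing migration" line with a "Migration successfully executed" line carrying its duration, and the run closes here at performed=547 in ~6.64 s; in this boot the seed_assignment nullable update (~86 ms) and the dashboard_public rename (~30 ms) dominate. A sketch for ranking the slow steps from a captured log (the file name is illustrative):

    import re

    # Pull (id, duration) pairs out of Grafana migrator lines such as:
    #   msg="Migration successfully executed" id="add share column" duration=8.424126ms
    PAT = re.compile(r'msg="Migration successfully executed" id="(?P<id>[^"]+)" '
                     r'duration=(?P<val>[\d.]+)(?P<unit>µs|ms|s)')
    TO_MS = {"µs": 1e-3, "ms": 1.0, "s": 1e3}

    durations = []
    with open("messages.log") as fh:          # path is illustrative
        for line in fh:
            m = PAT.search(line)
            if m:
                durations.append((float(m["val"]) * TO_MS[m["unit"]], m["id"]))

    for ms, mig_id in sorted(durations, reverse=True)[:5]:
        print(f"{ms:10.3f} ms  {mig_id}")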
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=sqlstore t=2025-12-01T09:52:20.22496475Z level=info msg="Created default organization"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=secrets t=2025-12-01T09:52:20.227258251Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:20 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=plugin.store t=2025-12-01T09:52:20.251243767Z level=info msg="Loading plugins..."
Dec  1 04:52:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:52:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:20.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
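The beast access line is close to common log format: client, user, timestamp, request line, status, byte count, then a latency field. The anonymous "HEAD /" probes arriving about once a second from 192.168.122.100/.102 are consistent with load-balancer health checks against the RGW frontends. One way to pull the fields apart:

    import re

    # Parse radosgw "beast" access lines of the form seen above.
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s')

    line = ('beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous '
            '[01/Dec/2025:09:52:20.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000027s')
    m = BEAST.search(line)
    print(m["ip"], m["req"], m["status"], m["lat"])  # health probe, ~1 ms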
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=local.finder t=2025-12-01T09:52:20.324262105Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=plugin.store t=2025-12-01T09:52:20.324426029Z level=info msg="Plugins loaded" count=55 duration=73.183392ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=query_data t=2025-12-01T09:52:20.327004409Z level=info msg="Query Service initialization"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=live.push_http t=2025-12-01T09:52:20.329843755Z level=info msg="Live Push Gateway initialization"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ngalert.migration t=2025-12-01T09:52:20.333888054Z level=info msg=Starting
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ngalert.migration t=2025-12-01T09:52:20.334221543Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ngalert.migration orgID=1 t=2025-12-01T09:52:20.334582733Z level=info msg="Migrating alerts for organisation"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ngalert.migration orgID=1 t=2025-12-01T09:52:20.335146648Z level=info msg="Alerts found to migrate" alerts=0
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ngalert.migration t=2025-12-01T09:52:20.336656909Z level=info msg="Completed alerting migration"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ngalert.state.manager t=2025-12-01T09:52:20.353986915Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=infra.usagestats.collector t=2025-12-01T09:52:20.355702062Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=provisioning.datasources t=2025-12-01T09:52:20.356715729Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=provisioning.alerting t=2025-12-01T09:52:20.366377779Z level=info msg="starting to provision alerting"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=provisioning.alerting t=2025-12-01T09:52:20.36639931Z level=info msg="finished to provision alerting"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=grafanaStorageLogger t=2025-12-01T09:52:20.366555664Z level=info msg="Storage starting"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ngalert.state.manager t=2025-12-01T09:52:20.367950932Z level=info msg="Warming state cache for startup"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=http.server t=2025-12-01T09:52:20.369811662Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=http.server t=2025-12-01T09:52:20.370146261Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ngalert.multiorg.alertmanager t=2025-12-01T09:52:20.38607751Z level=info msg="Starting MultiOrg Alertmanager"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=provisioning.dashboard t=2025-12-01T09:52:20.396335066Z level=info msg="starting to provision dashboards"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ngalert.state.manager t=2025-12-01T09:52:20.429556661Z level=info msg="State cache has been initialized" states=0 duration=61.603979ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ngalert.scheduler t=2025-12-01T09:52:20.429603083Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ticker t=2025-12-01T09:52:20.429654294Z level=info msg=starting first_tick=2025-12-01T09:52:30Z
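Note that first_tick is not 10 s after startup: the scheduler starts at 09:52:20.429 and first_tick lands on 09:52:30Z, which suggests the ticker fires on whole multiples of tickInterval. Under that alignment assumption the computation is just:

    import math
    from datetime import datetime, timezone

    # Alignment assumption inferred from the two timestamps logged above.
    start = datetime(2025, 12, 1, 9, 52, 20, 429654, tzinfo=timezone.utc)
    interval_s = 10
    first_tick = datetime.fromtimestamp(
        math.ceil(start.timestamp() / interval_s) * interval_s, tz=timezone.utc)
    print(first_tick.isoformat())  # 2025-12-01T09:52:30+00:00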
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=sqlstore.transactions t=2025-12-01T09:52:20.444685779Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=sqlstore.transactions t=2025-12-01T09:52:20.455256424Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Dec  1 04:52:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec  1 04:52:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec  1 04:52:20 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec  1 04:52:20 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 90 pg[10.a( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=9 ec=61/50 lis/c=88/71 les/c/f=89/72/0 sis=90) [1] r=0 lpr=90 pi=[71,90)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:20 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 90 pg[10.a( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=9 ec=61/50 lis/c=88/71 les/c/f=89/72/0 sis=90) [1] r=0 lpr=90 pi=[71,90)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:20 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 90 pg[10.1a( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=4 ec=61/50 lis/c=88/71 les/c/f=89/72/0 sis=90) [1] r=0 lpr=90 pi=[71,90)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:20 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 90 pg[10.1a( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=4 ec=61/50 lis/c=88/71 les/c/f=89/72/0 sis=90) [1] r=0 lpr=90 pi=[71,90)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:20 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 90 pg[10.9( v 56'1015 (0'0,56'1015] local-lis/les=89/90 n=6 ec=61/50 lis/c=87/69 les/c/f=88/70/0 sis=89) [1] r=0 lpr=89 pi=[69,89)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:20 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 90 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=89/90 n=7 ec=61/50 lis/c=87/69 les/c/f=88/70/0 sis=89) [1] r=0 lpr=89 pi=[69,89)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=plugins.update.checker t=2025-12-01T09:52:20.655583521Z level=info msg="Update check succeeded" duration=288.812401ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=grafana.update.checker t=2025-12-01T09:52:20.664152821Z level=info msg="Update check succeeded" duration=296.420455ms
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=provisioning.dashboard t=2025-12-01T09:52:20.712185346Z level=info msg="finished to provision dashboards"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=grafana-apiserver t=2025-12-01T09:52:20.743065097Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=grafana-apiserver t=2025-12-01T09:52:20.743672794Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec  1 04:52:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:20 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:21 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec  1 04:52:21 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec  1 04:52:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:52:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:21.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:21 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:52:21 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:52:21 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:52:21 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:52:21 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.jnboao on compute-0
Dec  1 04:52:21 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.jnboao on compute-0
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec  1 04:52:21 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec  1 04:52:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s; 254 B/s, 10 objects/s recovering
Dec  1 04:52:21 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 91 pg[10.a( v 56'1015 (0'0,56'1015] local-lis/les=90/91 n=9 ec=61/50 lis/c=88/71 les/c/f=89/72/0 sis=90) [1] r=0 lpr=90 pi=[71,90)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:21 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 91 pg[10.1a( v 56'1015 (0'0,56'1015] local-lis/les=90/91 n=4 ec=61/50 lis/c=88/71 les/c/f=89/72/0 sis=90) [1] r=0 lpr=90 pi=[71,90)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:21 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:21 np0005540825 podman[98977]: 2025-12-01 09:52:21.921675389 +0000 UTC m=+0.046430462 container create 2e4e7f50b94c14f15c216532328d66e6fa1a2ec512e002ddd78b7ab88c4023c3 (image=quay.io/ceph/keepalived:2.2.4, name=serene_agnesi, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, release=1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, name=keepalived)
Dec  1 04:52:21 np0005540825 systemd[90983]: Starting Mark boot as successful...
Dec  1 04:52:21 np0005540825 systemd[1]: Started libpod-conmon-2e4e7f50b94c14f15c216532328d66e6fa1a2ec512e002ddd78b7ab88c4023c3.scope.
Dec  1 04:52:21 np0005540825 systemd[90983]: Finished Mark boot as successful.
Dec  1 04:52:21 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:21 np0005540825 podman[98977]: 2025-12-01 09:52:21.901409673 +0000 UTC m=+0.026164776 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  1 04:52:22 np0005540825 podman[98977]: 2025-12-01 09:52:22.008500388 +0000 UTC m=+0.133255511 container init 2e4e7f50b94c14f15c216532328d66e6fa1a2ec512e002ddd78b7ab88c4023c3 (image=quay.io/ceph/keepalived:2.2.4, name=serene_agnesi, io.openshift.tags=Ceph keepalived, name=keepalived, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, version=2.2.4, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec  1 04:52:22 np0005540825 podman[98977]: 2025-12-01 09:52:22.021364764 +0000 UTC m=+0.146119837 container start 2e4e7f50b94c14f15c216532328d66e6fa1a2ec512e002ddd78b7ab88c4023c3 (image=quay.io/ceph/keepalived:2.2.4, name=serene_agnesi, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.buildah.version=1.28.2, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, name=keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64)
Dec  1 04:52:22 np0005540825 podman[98977]: 2025-12-01 09:52:22.025074874 +0000 UTC m=+0.149829937 container attach 2e4e7f50b94c14f15c216532328d66e6fa1a2ec512e002ddd78b7ab88c4023c3 (image=quay.io/ceph/keepalived:2.2.4, name=serene_agnesi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, distribution-scope=public, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph)
Dec  1 04:52:22 np0005540825 serene_agnesi[98994]: 0 0
Dec  1 04:52:22 np0005540825 systemd[1]: libpod-2e4e7f50b94c14f15c216532328d66e6fa1a2ec512e002ddd78b7ab88c4023c3.scope: Deactivated successfully.
Dec  1 04:52:22 np0005540825 podman[98977]: 2025-12-01 09:52:22.026440661 +0000 UTC m=+0.151195744 container died 2e4e7f50b94c14f15c216532328d66e6fa1a2ec512e002ddd78b7ab88c4023c3 (image=quay.io/ceph/keepalived:2.2.4, name=serene_agnesi, com.redhat.component=keepalived-container, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, release=1793, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph)
Dec  1 04:52:22 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c5071ba65a92fd8f4ff68c655d2c68a6bc8c3a2d0fa414a27c6a257ca40c8b2b-merged.mount: Deactivated successfully.
Dec  1 04:52:22 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec  1 04:52:22 np0005540825 podman[98977]: 2025-12-01 09:52:22.081424832 +0000 UTC m=+0.206179905 container remove 2e4e7f50b94c14f15c216532328d66e6fa1a2ec512e002ddd78b7ab88c4023c3 (image=quay.io/ceph/keepalived:2.2.4, name=serene_agnesi, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, name=keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc.)
Dec  1 04:52:22 np0005540825 systemd[1]: libpod-conmon-2e4e7f50b94c14f15c216532328d66e6fa1a2ec512e002ddd78b7ab88c4023c3.scope: Deactivated successfully.
Dec  1 04:52:22 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec  1 04:52:22 np0005540825 systemd[1]: Reloading.
Dec  1 04:52:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:22 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:22 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:52:22 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:52:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:52:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:22.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:52:22 np0005540825 systemd[1]: Reloading.
Dec  1 04:52:22 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:52:22 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:52:22 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:52:22 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:52:22 np0005540825 ceph-mon[74416]: Deploying daemon keepalived.rgw.default.compute-0.jnboao on compute-0
Dec  1 04:52:22 np0005540825 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.jnboao for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:52:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:22 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:23 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec  1 04:52:23 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec  1 04:52:23 np0005540825 podman[99148]: 2025-12-01 09:52:23.140235257 +0000 UTC m=+0.038817316 container create e119f91dcd39df82addada4d343e0f6e04f1ab131272f54c545ca3c44c4e39e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., name=keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, build-date=2023-02-22T09:23:20)
Dec  1 04:52:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:52:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:23.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:52:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57f4c8a4b4fec46a09e0d0a9e8462075155fb2cb6735c237663a7138ef10f4c7/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:23 np0005540825 podman[99148]: 2025-12-01 09:52:23.200813759 +0000 UTC m=+0.099395848 container init e119f91dcd39df82addada4d343e0f6e04f1ab131272f54c545ca3c44c4e39e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, description=keepalived for Ceph, name=keepalived, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public)
Dec  1 04:52:23 np0005540825 podman[99148]: 2025-12-01 09:52:23.20565361 +0000 UTC m=+0.104235669 container start e119f91dcd39df82addada4d343e0f6e04f1ab131272f54c545ca3c44c4e39e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao, io.openshift.expose-services=, version=2.2.4, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., release=1793, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64)
Dec  1 04:52:23 np0005540825 bash[99148]: e119f91dcd39df82addada4d343e0f6e04f1ab131272f54c545ca3c44c4e39e7
Dec  1 04:52:23 np0005540825 podman[99148]: 2025-12-01 09:52:23.123798045 +0000 UTC m=+0.022380134 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  1 04:52:23 np0005540825 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.jnboao for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao[99163]: Mon Dec  1 09:52:23 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao[99163]: Mon Dec  1 09:52:23 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao[99163]: Mon Dec  1 09:52:23 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao[99163]: Mon Dec  1 09:52:23 2025: Configuration file /etc/keepalived/keepalived.conf
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao[99163]: Mon Dec  1 09:52:23 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao[99163]: Mon Dec  1 09:52:23 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao[99163]: Mon Dec  1 09:52:23 2025: Starting VRRP child process, pid=4
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao[99163]: Mon Dec  1 09:52:23 2025: Startup complete
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao[99163]: Mon Dec  1 09:52:23 2025: (VI_0) Entering BACKUP STATE (init)
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:52:23 2025: (VI_0) Entering BACKUP STATE
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao[99163]: Mon Dec  1 09:52:23 2025: VRRP_Script(check_backend) succeeded
Dec  1 04:52:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  1 04:52:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:23 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:52:23 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:52:23 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:52:23 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:52:23 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.pcdbyn on compute-2
Dec  1 04:52:23 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.pcdbyn on compute-2
Dec  1 04:52:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:52:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 10 op/s; 0 B/s, 6 objects/s recovering
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:23 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr[97165]: Mon Dec  1 09:52:23 2025: (VI_0) Entering MASTER STATE
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:24 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec  1 04:52:24 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec  1 04:52:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:24 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:24.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:24.368327) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582744368514, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7985, "num_deletes": 252, "total_data_size": 14894720, "memory_usage": 15492864, "flush_reason": "Manual Compaction"}
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582744517394, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12769601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 8122, "table_properties": {"data_size": 12740448, "index_size": 18478, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 93416, "raw_average_key_size": 24, "raw_value_size": 12667555, "raw_average_value_size": 3304, "num_data_blocks": 814, "num_entries": 3834, "num_filter_entries": 3834, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582412, "oldest_key_time": 1764582412, "file_creation_time": 1764582744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 149132 microseconds, and 27922 cpu microseconds.
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:24.517488) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12769601 bytes OK
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:24.517527) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:24.521022) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:24.521050) EVENT_LOG_v1 {"time_micros": 1764582744521042, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:24.521083) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14858121, prev total WAL file size 14858121, number of live WAL files 2.
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:24.525666) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(12MB) 13(57KB) 8(1944B)]
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582744525751, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12830039, "oldest_snapshot_seqno": -1}
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3650 keys, 12783770 bytes, temperature: kUnknown
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582744599532, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12783770, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12755016, "index_size": 18532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9157, "raw_key_size": 91537, "raw_average_key_size": 25, "raw_value_size": 12683730, "raw_average_value_size": 3474, "num_data_blocks": 818, "num_entries": 3650, "num_filter_entries": 3650, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764582744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:24.599798) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12783770 bytes
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:24.602479) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.7 rd, 173.0 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(12.2, 0.0 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3943, records dropped: 293 output_compression: NoCompression
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:24.602503) EVENT_LOG_v1 {"time_micros": 1764582744602493, "job": 4, "event": "compaction_finished", "compaction_time_micros": 73875, "compaction_time_cpu_micros": 27311, "output_level": 6, "num_output_files": 1, "total_output_size": 12783770, "num_input_records": 3943, "num_output_records": 3650, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582744605080, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582744605218, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582744605350, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec  1 04:52:24 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:24.525584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:52:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:24 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c001930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:25 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  1 04:52:25 np0005540825 ceph-mon[74416]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  1 04:52:25 np0005540825 ceph-mon[74416]: Deploying daemon keepalived.rgw.default.compute-2.pcdbyn on compute-2
Dec  1 04:52:25 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec  1 04:52:25 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec  1 04:52:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:25.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:52:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:52:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  1 04:52:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:25 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 668c2fa3-8c92-438a-a0da-986dbd0d5a14 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec  1 04:52:25 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 668c2fa3-8c92-438a-a0da-986dbd0d5a14 (Updating ingress.rgw.default deployment (+4 -> 4)) in 12 seconds
Dec  1 04:52:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  1 04:52:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:25 np0005540825 ceph-mgr[74709]: [progress INFO root] update: starting ev 269eacf6-c8dc-4b22-be4e-c0c55abbeb64 (Updating prometheus deployment (+1 -> 1))
Dec  1 04:52:25 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Dec  1 04:52:25 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Dec  1 04:52:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 0 B/s wr, 9 op/s; 0 B/s, 5 objects/s recovering
Dec  1 04:52:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:25 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:26 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Dec  1 04:52:26 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Dec  1 04:52:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:26 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:52:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:52:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:26 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:26 np0005540825 ceph-mon[74416]: Deploying daemon prometheus.compute-0 on compute-0
Dec  1 04:52:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-rgw-default-compute-0-jnboao[99163]: Mon Dec  1 09:52:26 2025: (VI_0) Entering MASTER STATE
Dec  1 04:52:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:26 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:27 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Dec  1 04:52:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:27.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:27 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Dec  1 04:52:27 np0005540825 systemd[1]: session-37.scope: Deactivated successfully.
Dec  1 04:52:27 np0005540825 systemd[1]: session-37.scope: Consumed 8.668s CPU time.
Dec  1 04:52:27 np0005540825 systemd-logind[789]: Session 37 logged out. Waiting for processes to exit.
Dec  1 04:52:27 np0005540825 systemd-logind[789]: Removed session 37.
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  1 04:52:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  1 04:52:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  1 04:52:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:27 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c001930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:52:27 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 04:52:28 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec  1 04:52:28 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec  1 04:52:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:28 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:52:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:52:28 np0005540825 ceph-mgr[74709]: [progress INFO root] Writing back 28 completed events
Dec  1 04:52:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  1 04:52:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:52:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec  1 04:52:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:28 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event f12034c6-3b97-439e-b1a3-055f291a4f01 (Global Recovery Event) in 28 seconds
Dec  1 04:52:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  1 04:52:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec  1 04:52:28 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  1 04:52:28 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec  1 04:52:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 92 pg[10.1b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=92) [1] r=0 lpr=92 pi=[69,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:28 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 92 pg[10.b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=92) [1] r=0 lpr=92 pi=[70,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:29 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec  1 04:52:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  1 04:52:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:29 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec  1 04:52:29 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:29 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  1 04:52:29 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  1 04:52:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  1 04:52:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec  1 04:52:29 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec  1 04:52:29 np0005540825 podman[99296]: 2025-12-01 09:52:29.744803506 +0000 UTC m=+3.658390539 volume create 912ca928ec85a176c05867bd9ecffda69b61d71712157b3b934f99b10024bdd8
Dec  1 04:52:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 93 pg[10.b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=93) [1]/[2] r=-1 lpr=93 pi=[70,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 93 pg[10.b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=93) [1]/[2] r=-1 lpr=93 pi=[70,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 93 pg[10.1b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=93) [1]/[2] r=-1 lpr=93 pi=[69,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 93 pg[10.1b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=93) [1]/[2] r=-1 lpr=93 pi=[69,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 93 pg[10.1c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=93) [1] r=0 lpr=93 pi=[78,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:29 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 93 pg[10.c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=93) [1] r=0 lpr=93 pi=[78,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:29 np0005540825 podman[99296]: 2025-12-01 09:52:29.754401254 +0000 UTC m=+3.667988297 container create 28cec75b105208b737d0e4137197e7b2e0268ccf589868cc986b03b5ae25090f (image=quay.io/prometheus/prometheus:v2.51.0, name=wonderful_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:29 np0005540825 systemd[1]: Started libpod-conmon-28cec75b105208b737d0e4137197e7b2e0268ccf589868cc986b03b5ae25090f.scope.
Dec  1 04:52:29 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:29 np0005540825 podman[99296]: 2025-12-01 09:52:29.730253124 +0000 UTC m=+3.643840187 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  1 04:52:29 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e541ac6416692725a45674c339b95c7715b0c20db8f55c1ba627e8f3357d05bf/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:29 np0005540825 podman[99296]: 2025-12-01 09:52:29.829152598 +0000 UTC m=+3.742739741 container init 28cec75b105208b737d0e4137197e7b2e0268ccf589868cc986b03b5ae25090f (image=quay.io/prometheus/prometheus:v2.51.0, name=wonderful_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:29 np0005540825 podman[99296]: 2025-12-01 09:52:29.836569428 +0000 UTC m=+3.750156471 container start 28cec75b105208b737d0e4137197e7b2e0268ccf589868cc986b03b5ae25090f (image=quay.io/prometheus/prometheus:v2.51.0, name=wonderful_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:29 np0005540825 podman[99296]: 2025-12-01 09:52:29.839788805 +0000 UTC m=+3.753375868 container attach 28cec75b105208b737d0e4137197e7b2e0268ccf589868cc986b03b5ae25090f (image=quay.io/prometheus/prometheus:v2.51.0, name=wonderful_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:29 np0005540825 wonderful_wozniak[99554]: 65534 65534
Dec  1 04:52:29 np0005540825 systemd[1]: libpod-28cec75b105208b737d0e4137197e7b2e0268ccf589868cc986b03b5ae25090f.scope: Deactivated successfully.
Dec  1 04:52:29 np0005540825 podman[99296]: 2025-12-01 09:52:29.840850433 +0000 UTC m=+3.754437506 container died 28cec75b105208b737d0e4137197e7b2e0268ccf589868cc986b03b5ae25090f (image=quay.io/prometheus/prometheus:v2.51.0, name=wonderful_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:29 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e541ac6416692725a45674c339b95c7715b0c20db8f55c1ba627e8f3357d05bf-merged.mount: Deactivated successfully.
Dec  1 04:52:29 np0005540825 podman[99296]: 2025-12-01 09:52:29.887149161 +0000 UTC m=+3.800736214 container remove 28cec75b105208b737d0e4137197e7b2e0268ccf589868cc986b03b5ae25090f (image=quay.io/prometheus/prometheus:v2.51.0, name=wonderful_wozniak, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:29 np0005540825 podman[99296]: 2025-12-01 09:52:29.891243991 +0000 UTC m=+3.804831074 volume remove 912ca928ec85a176c05867bd9ecffda69b61d71712157b3b934f99b10024bdd8
Dec  1 04:52:29 np0005540825 systemd[1]: libpod-conmon-28cec75b105208b737d0e4137197e7b2e0268ccf589868cc986b03b5ae25090f.scope: Deactivated successfully.
Dec  1 04:52:29 np0005540825 podman[99569]: 2025-12-01 09:52:29.973788355 +0000 UTC m=+0.048633812 volume create bf828035e81b5053ed70ba34ebc1088eb4a889b02d302ec81f6f1705894bc521
Dec  1 04:52:29 np0005540825 podman[99569]: 2025-12-01 09:52:29.982554681 +0000 UTC m=+0.057400148 container create a9f5002f9b8a7d609776aa38bead09187210934572e8b80061f3448c454dfdb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_shaw, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:30 np0005540825 systemd[1]: Started libpod-conmon-a9f5002f9b8a7d609776aa38bead09187210934572e8b80061f3448c454dfdb1.scope.
Dec  1 04:52:30 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233229791bf92c74ea7471dd554ebd756b2b2a745f63506818037fc5462e0ed4/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:30 np0005540825 podman[99569]: 2025-12-01 09:52:29.952377148 +0000 UTC m=+0.027222645 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  1 04:52:30 np0005540825 podman[99569]: 2025-12-01 09:52:30.045133047 +0000 UTC m=+0.119978514 container init a9f5002f9b8a7d609776aa38bead09187210934572e8b80061f3448c454dfdb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_shaw, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:30 np0005540825 podman[99569]: 2025-12-01 09:52:30.051749055 +0000 UTC m=+0.126594502 container start a9f5002f9b8a7d609776aa38bead09187210934572e8b80061f3448c454dfdb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_shaw, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:30 np0005540825 mystifying_shaw[99585]: 65534 65534
Dec  1 04:52:30 np0005540825 systemd[1]: libpod-a9f5002f9b8a7d609776aa38bead09187210934572e8b80061f3448c454dfdb1.scope: Deactivated successfully.
Dec  1 04:52:30 np0005540825 podman[99569]: 2025-12-01 09:52:30.054869869 +0000 UTC m=+0.129715306 container attach a9f5002f9b8a7d609776aa38bead09187210934572e8b80061f3448c454dfdb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_shaw, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:30 np0005540825 podman[99569]: 2025-12-01 09:52:30.055438384 +0000 UTC m=+0.130283831 container died a9f5002f9b8a7d609776aa38bead09187210934572e8b80061f3448c454dfdb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_shaw, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:30 np0005540825 systemd[1]: var-lib-containers-storage-overlay-233229791bf92c74ea7471dd554ebd756b2b2a745f63506818037fc5462e0ed4-merged.mount: Deactivated successfully.
Dec  1 04:52:30 np0005540825 podman[99569]: 2025-12-01 09:52:30.099100761 +0000 UTC m=+0.173946198 container remove a9f5002f9b8a7d609776aa38bead09187210934572e8b80061f3448c454dfdb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=mystifying_shaw, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:30 np0005540825 podman[99569]: 2025-12-01 09:52:30.103680644 +0000 UTC m=+0.178526101 volume remove bf828035e81b5053ed70ba34ebc1088eb4a889b02d302ec81f6f1705894bc521
Dec  1 04:52:30 np0005540825 systemd[1]: libpod-conmon-a9f5002f9b8a7d609776aa38bead09187210934572e8b80061f3448c454dfdb1.scope: Deactivated successfully.
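The two short-lived prometheus containers above (wonderful_wozniak, then mystifying_shaw) run the same create, init, start, attach, died, remove arc in under a second, and each prints only "65534 65534" (the nobody uid/gid) before exiting; this looks like cephadm probing the image for the user and group to run the real daemon as, not a crash loop. A sketch (names mine, event format as seen in these podman lines) that groups lifecycle events per container ID so throwaway runs like these stand out:

    import re
    from collections import defaultdict

    # "container <event> <64-hex-id>" as emitted by podman in these lines.
    EVENT_RE = re.compile(
        r"container (?P<event>create|init|start|attach|died|remove) "
        r"(?P<cid>[0-9a-f]{64})"
    )

    def container_histories(lines):
        """Map container ID -> ordered list of lifecycle events."""
        hist = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                hist[m.group("cid")].append(m.group("event"))
        return hist

    # A history reading create..remove within a second or so was a probe run.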
Dec  1 04:52:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:30 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c001930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:30.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:30 np0005540825 systemd[1]: Reloading.
Dec  1 04:52:30 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:52:30 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:52:30 np0005540825 systemd[1]: Reloading.
Dec  1 04:52:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec  1 04:52:30 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:52:30 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:52:30 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  1 04:52:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec  1 04:52:30 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec  1 04:52:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 94 pg[10.c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[78,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 94 pg[10.c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[78,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 94 pg[10.1c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[78,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:30 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 94 pg[10.1c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[78,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:30 np0005540825 systemd[1]: Starting Ceph prometheus.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:31 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:52:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:52:31 np0005540825 podman[99730]: 2025-12-01 09:52:31.203890404 +0000 UTC m=+0.046597826 container create f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:31 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae830a513844f463cf2a00da1ce13867828f459710077148c55a859ca602f13/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:31 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae830a513844f463cf2a00da1ce13867828f459710077148c55a859ca602f13/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:31 np0005540825 podman[99730]: 2025-12-01 09:52:31.261115966 +0000 UTC m=+0.103823408 container init f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:31 np0005540825 podman[99730]: 2025-12-01 09:52:31.266074189 +0000 UTC m=+0.108781611 container start f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:31 np0005540825 bash[99730]: f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552
Dec  1 04:52:31 np0005540825 podman[99730]: 2025-12-01 09:52:31.182919179 +0000 UTC m=+0.025626621 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  1 04:52:31 np0005540825 systemd[1]: Started Ceph prometheus.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.298Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.298Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.298Z caller=main.go:623 level=info host_details="(Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 x86_64 compute-0 (none))"
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.298Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.298Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.300Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.300Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.302Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.303Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.309Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.309Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=8.3µs
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.309Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.309Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.309Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=78.532µs wal_replay_duration=518.834µs wbl_replay_duration=200ns total_replay_duration=699.249µs
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.314Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.314Z caller=main.go:1153 level=info msg="TSDB started"
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.314Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.356Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=41.831497ms db_storage=1.35µs remote_storage=1.62µs web_handler=570ns query_engine=890ns scrape=4.377978ms scrape_sd=252.007µs notify=27.381µs notify_sd=20.29µs rules=36.174555ms tracing=13.08µs
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.356Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0[99747]: ts=2025-12-01T09:52:31.356Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
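With "Server is ready to receive web requests" logged and TLS disabled on 192.168.122.100:9095 per the tls_config lines, this instance can be health-checked over plain HTTP. A minimal sketch against Prometheus's standard /-/ready management endpoint, using the address logged above:

    import urllib.request

    def prometheus_ready(base="http://192.168.122.100:9095"):
        """True if the Prometheus /-/ready endpoint answers 200."""
        try:
            with urllib.request.urlopen(base + "/-/ready", timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False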
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:31 np0005540825 ceph-mgr[74709]: [progress INFO root] complete: finished ev 269eacf6-c8dc-4b22-be4e-c0c55abbeb64 (Updating prometheus deployment (+1 -> 1))
Dec  1 04:52:31 np0005540825 ceph-mgr[74709]: [progress INFO root] Completed event 269eacf6-c8dc-4b22-be4e-c0c55abbeb64 (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec  1 04:52:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:31 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' 
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec  1 04:52:31 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec  1 04:52:31 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 95 pg[10.b( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=6 ec=61/50 lis/c=93/70 les/c/f=94/71/0 sis=95) [1] r=0 lpr=95 pi=[70,95)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:31 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 95 pg[10.b( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=6 ec=61/50 lis/c=93/70 les/c/f=94/71/0 sis=95) [1] r=0 lpr=95 pi=[70,95)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:31 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 95 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=2 ec=61/50 lis/c=93/69 les/c/f=94/70/0 sis=95) [1] r=0 lpr=95 pi=[69,95)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:31 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 95 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=2 ec=61/50 lis/c=93/69 les/c/f=94/70/0 sis=95) [1] r=0 lpr=95 pi=[69,95)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:32 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr respawn  1: '-n'
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr respawn  2: 'mgr.compute-0.fospow'
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr respawn  3: '-f'
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr respawn  4: '--setuser'
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr respawn  5: 'ceph'
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr respawn  6: '--setgroup'
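The "mgr respawn" lines are ceph-mgr printing the argv it is about to re-execute itself with: enabling the prometheus module changed the set of enabled modules, and the active mgr restarts in place (the --setuser/--setgroup warnings and the module loads that follow come from the fresh process). The pattern amounts to a plain re-exec of the saved executable and arguments; a minimal sketch of that idiom, not Ceph's actual code:

    import os
    import sys

    def respawn(saved_argv=None):
        """Replace this process with a fresh copy of itself, keeping the
        original command line (the 'respawn' idiom logged above)."""
        argv = saved_argv or sys.argv
        # argv[0] must be a real executable path; here it is '/usr/bin/ceph-mgr'.
        os.execv(argv[0], argv)  # does not return on success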
Dec  1 04:52:32 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.fospow(active, since 2m), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:52:32 np0005540825 systemd[1]: session-35.scope: Deactivated successfully.
Dec  1 04:52:32 np0005540825 systemd[1]: session-35.scope: Consumed 54.760s CPU time.
Dec  1 04:52:32 np0005540825 systemd-logind[789]: Session 35 logged out. Waiting for processes to exit.
Dec  1 04:52:32 np0005540825 systemd-logind[789]: Removed session 35.
Dec  1 04:52:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ignoring --setuser ceph since I am not root
Dec  1 04:52:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ignoring --setgroup ceph since I am not root
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: pidfile_write: ignore empty --pid-file
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'alerts'
Dec  1 04:52:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:32.721+0000 7f54a9af2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'balancer'
Dec  1 04:52:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:32.804+0000 7f54a9af2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  1 04:52:32 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'cephadm'
Dec  1 04:52:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec  1 04:52:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:33 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: from='mgr.14394 192.168.122.100:0/1633172299' entity='mgr.compute-0.fospow' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec  1 04:52:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 96 pg[10.c( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=94/78 les/c/f=95/79/0 sis=96) [1] r=0 lpr=96 pi=[78,96)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 96 pg[10.c( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=94/78 les/c/f=95/79/0 sis=96) [1] r=0 lpr=96 pi=[78,96)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 96 pg[10.1c( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=7 ec=61/50 lis/c=94/78 les/c/f=95/79/0 sis=96) [1] r=0 lpr=96 pi=[78,96)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 96 pg[10.1c( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=7 ec=61/50 lis/c=94/78 les/c/f=95/79/0 sis=96) [1] r=0 lpr=96 pi=[78,96)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 96 pg[10.b( v 56'1015 (0'0,56'1015] local-lis/les=95/96 n=6 ec=61/50 lis/c=93/70 les/c/f=94/71/0 sis=95) [1] r=0 lpr=95 pi=[70,95)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 96 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=95/96 n=2 ec=61/50 lis/c=93/69 les/c/f=94/70/0 sis=95) [1] r=0 lpr=95 pi=[69,95)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:33.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:33 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec  1 04:52:33 np0005540825 ceph-osd[82809]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec  1 04:52:33 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'crash'
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:52:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:33 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:33.673+0000 7f54a9af2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:52:33 np0005540825 ceph-mgr[74709]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  1 04:52:33 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'dashboard'
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.695682) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582753695756, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 495, "num_deletes": 251, "total_data_size": 701777, "memory_usage": 712360, "flush_reason": "Manual Compaction"}
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582753705057, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 698083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8123, "largest_seqno": 8617, "table_properties": {"data_size": 695126, "index_size": 865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 6844, "raw_average_key_size": 17, "raw_value_size": 688892, "raw_average_value_size": 1770, "num_data_blocks": 37, "num_entries": 389, "num_filter_entries": 389, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582745, "oldest_key_time": 1764582745, "file_creation_time": 1764582753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 9439 microseconds, and 5601 cpu microseconds.
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.705122) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 698083 bytes OK
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.705155) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.706723) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.706747) EVENT_LOG_v1 {"time_micros": 1764582753706739, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.706771) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 698709, prev total WAL file size 698709, number of live WAL files 2.
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.707514) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(681KB)], [20(12MB)]
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582753707594, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 13481853, "oldest_snapshot_seqno": -1}
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3514 keys, 13035052 bytes, temperature: kUnknown
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582753794348, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 13035052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13007036, "index_size": 18114, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8837, "raw_key_size": 90776, "raw_average_key_size": 25, "raw_value_size": 12937904, "raw_average_value_size": 3681, "num_data_blocks": 782, "num_entries": 3514, "num_filter_entries": 3514, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764582753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.794650) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13035052 bytes
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.796611) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.3 rd, 150.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.2 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(38.0) write-amplify(18.7) OK, records in: 4039, records dropped: 525 output_compression: NoCompression
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.796645) EVENT_LOG_v1 {"time_micros": 1764582753796630, "job": 6, "event": "compaction_finished", "compaction_time_micros": 86838, "compaction_time_cpu_micros": 44585, "output_level": 6, "num_output_files": 1, "total_output_size": 13035052, "num_input_records": 4039, "num_output_records": 3514, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582753797005, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582753801188, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.707434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.801262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.801269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.801271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.801273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:52:33 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:52:33.801275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
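The compaction summary for job 6 reports its own amplification figures, and they follow directly from the byte counts in the surrounding EVENT_LOG_v1 records: the flushed L0 table (000022) is 698,083 bytes, the compaction input (L0 plus L6) is 13,481,853 bytes, and the output table (000023) is 13,035,052 bytes. A quick arithmetic check of the logged write-amplify(18.7) and read-write-amplify(38.0):

    l0_input = 698_083        # flushed table 000022, job 5
    total_input = 13_481_853  # input_data_size for compaction job 6
    output = 13_035_052       # generated table 000023

    write_amplify = output / l0_input                       # ~18.7
    read_write_amplify = (total_input + output) / l0_input  # ~38.0
    print(f"{write_amplify:.1f} {read_write_amplify:.1f}")  # 18.7 38.0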
Dec  1 04:52:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec  1 04:52:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec  1 04:52:34 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec  1 04:52:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 97 pg[10.1c( v 56'1015 (0'0,56'1015] local-lis/les=96/97 n=7 ec=61/50 lis/c=94/78 les/c/f=95/79/0 sis=96) [1] r=0 lpr=96 pi=[78,96)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:34 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 97 pg[10.c( v 56'1015 (0'0,56'1015] local-lis/les=96/97 n=5 ec=61/50 lis/c=94/78 les/c/f=95/79/0 sis=96) [1] r=0 lpr=96 pi=[78,96)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:34 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:34.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:34 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'devicehealth'
Dec  1 04:52:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:34.369+0000 7f54a9af2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  1 04:52:34 np0005540825 ceph-mgr[74709]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  1 04:52:34 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'diskprediction_local'
Dec  1 04:52:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  1 04:52:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  1 04:52:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]:  from numpy import show_config as show_numpy_config
Dec  1 04:52:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:34.552+0000 7f54a9af2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:52:34 np0005540825 ceph-mgr[74709]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  1 04:52:34 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'influx'
Dec  1 04:52:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:34.626+0000 7f54a9af2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:52:34 np0005540825 ceph-mgr[74709]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  1 04:52:34 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'insights'
Dec  1 04:52:34 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'iostat'
Dec  1 04:52:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:34.771+0000 7f54a9af2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:52:34 np0005540825 ceph-mgr[74709]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  1 04:52:34 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'k8sevents'
Dec  1 04:52:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:35 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001dd0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:35 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'localpool'
Dec  1 04:52:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:35.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:35 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mds_autoscaler'
Dec  1 04:52:35 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'mirroring'
Dec  1 04:52:35 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'nfs'
Dec  1 04:52:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:35 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:35.768+0000 7f54a9af2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:52:35 np0005540825 ceph-mgr[74709]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  1 04:52:35 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'orchestrator'
Dec  1 04:52:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:35.978+0000 7f54a9af2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:52:35 np0005540825 ceph-mgr[74709]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  1 04:52:35 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_perf_query'
Dec  1 04:52:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:36.051+0000 7f54a9af2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'osd_support'
Dec  1 04:52:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:36.116+0000 7f54a9af2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'pg_autoscaler'
Dec  1 04:52:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:36.189+0000 7f54a9af2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'progress'
Dec  1 04:52:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:36 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:36.254+0000 7f54a9af2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'prometheus'
Dec  1 04:52:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:52:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:36.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:52:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:36.585+0000 7f54a9af2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rbd_support'
Dec  1 04:52:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:36.687+0000 7f54a9af2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'restful'
Dec  1 04:52:36 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rgw'
Dec  1 04:52:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:37 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:37.130+0000 7f54a9af2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:52:37 np0005540825 ceph-mgr[74709]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  1 04:52:37 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'rook'
Dec  1 04:52:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:37.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:37 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001f70 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:37.708+0000 7f54a9af2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:52:37 np0005540825 ceph-mgr[74709]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  1 04:52:37 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'selftest'
Dec  1 04:52:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:37.787+0000 7f54a9af2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:52:37 np0005540825 ceph-mgr[74709]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  1 04:52:37 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'snap_schedule'
Dec  1 04:52:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:37.866+0000 7f54a9af2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:52:37 np0005540825 ceph-mgr[74709]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  1 04:52:37 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'stats'
Dec  1 04:52:37 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'status'
Dec  1 04:52:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:38.029+0000 7f54a9af2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telegraf'
Dec  1 04:52:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:38.102+0000 7f54a9af2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'telemetry'
Dec  1 04:52:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:38 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:38.269+0000 7f54a9af2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'test_orchestrator'
Dec  1 04:52:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:38.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:38.495+0000 7f54a9af2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'volumes'
Dec  1 04:52:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:52:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:38.767+0000 7f54a9af2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: mgr[py] Loading python module 'zabbix'
Dec  1 04:52:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:38.837+0000 7f54a9af2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
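The repeated "Module X has missing NOTIFY_TYPES member" lines come from the restarted mgr warning, once per Python module, that the module does not declare which cluster notifications it consumes; the surrounding "Loading python module" lines show the modules still load, so the warnings are noisy rather than fatal. Each also appears twice here, once via the container's stderr and once via ceph-mgr's own channel. A sketch (message text as shown in this journal) that tallies the affected modules:

    import re
    from collections import Counter

    WARN_RE = re.compile(r"mgr\[py\] Module (\w+) has missing NOTIFY_TYPES member")

    def notify_types_warnings(lines):
        """Count NOTIFY_TYPES warnings per mgr module."""
        counts = Counter()
        for line in lines:
            m = WARN_RE.search(line)
            if m:
                counts[m.group(1)] += 1
        return counts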
Dec  1 04:52:38 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fospow restarted
Dec  1 04:52:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec  1 04:52:38 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fospow
Dec  1 04:52:38 np0005540825 ceph-mgr[74709]: ms_deliver_dispatch: unhandled message 0x560920dcf860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:39 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:52:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
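The radosgw "beast" lines amount to a small access log for the HEAD / health probes arriving from 192.168.122.100 and 192.168.122.102 roughly once a second; each carries the client IP, user, timestamp, request line, status, byte count, and latency. A sketch (regex mine, layout as seen in these lines) that pulls those fields out:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d{3}) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        m = BEAST_RE.search(line)
        return m.groupdict() if m else None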
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map Activating!
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.fospow(active, starting, since 0.384359s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr handle_mgr_map I am now activating
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.xijran"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.xijran"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e13 all = 0
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.ijlzoi"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ijlzoi"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e13 all = 0
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.yoegjc"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.yoegjc"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e13 all = 0
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fospow", "id": "compute-0.fospow"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fospow", "id": "compute-0.fospow"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.ymizfm", "id": "compute-1.ymizfm"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-1.ymizfm", "id": "compute-1.ymizfm"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.kdtkls", "id": "compute-2.kdtkls"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr metadata", "who": "compute-2.kdtkls", "id": "compute-2.kdtkls"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).mds e13 all = 1
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: Active manager daemon compute-0.fospow restarted
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: Activating manager daemon compute-0.fospow
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.ymizfm restarted
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.ymizfm started
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: balancer
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Starting
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Manager daemon compute-0.fospow is now available
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:52:39
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: cephadm
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: crash
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: dashboard
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: devicehealth
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: iostat
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Starting
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: nfs
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: orchestrator
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO sso] Loading SSO DB version=1
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: pg_autoscaler
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: progress
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [progress INFO root] Loading...
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f5429e26dc0>, <progress.module.GhostEvent object at 0x7f5429e26ca0>, <progress.module.GhostEvent object at 0x7f5429e26c40>, <progress.module.GhostEvent object at 0x7f5429e26df0>, <progress.module.GhostEvent object at 0x7f5429e26e50>, <progress.module.GhostEvent object at 0x7f5429e26e80>, <progress.module.GhostEvent object at 0x7f5429e26eb0>, <progress.module.GhostEvent object at 0x7f5429e26ee0>, <progress.module.GhostEvent object at 0x7f5429e26f10>, <progress.module.GhostEvent object at 0x7f5429e26f40>, <progress.module.GhostEvent object at 0x7f5429e26f70>, <progress.module.GhostEvent object at 0x7f5429e26fa0>, <progress.module.GhostEvent object at 0x7f5429e26fd0>, <progress.module.GhostEvent object at 0x7f5429e31040>, <progress.module.GhostEvent object at 0x7f5429e31070>, <progress.module.GhostEvent object at 0x7f5429e310a0>, <progress.module.GhostEvent object at 0x7f5429e310d0>, <progress.module.GhostEvent object at 0x7f5429e31100>, <progress.module.GhostEvent object at 0x7f5429e31130>, <progress.module.GhostEvent object at 0x7f5429e31160>, <progress.module.GhostEvent object at 0x7f5429e31190>, <progress.module.GhostEvent object at 0x7f5429e311c0>, <progress.module.GhostEvent object at 0x7f5429e311f0>, <progress.module.GhostEvent object at 0x7f5429e31220>, <progress.module.GhostEvent object at 0x7f5429e31250>, <progress.module.GhostEvent object at 0x7f5429e31280>, <progress.module.GhostEvent object at 0x7f5429e312b0>, <progress.module.GhostEvent object at 0x7f5429e312e0>] historic events
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [progress INFO root] Loaded OSDMap, ready.
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: prometheus
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [prometheus INFO root] Cache enabled
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [prometheus INFO root] starting metric collection thread
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [prometheus INFO root] Starting engine...
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: [01/Dec/2025:09:52:39] ENGINE Bus STARTING
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.error] [01/Dec/2025:09:52:39] ENGINE Bus STARTING
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: CherryPy Checker:
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: The Application mounted at '' has an empty config.
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] recovery thread starting
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] starting setup
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: rbd_support
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: restful
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [restful INFO root] server_addr: :: server_port: 8003
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: status
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: telemetry
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [restful WARNING root] server not running: no certificate configured
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] PerfHandler: starting
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: mgr load Constructed class from module: volumes
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:39.656+0000 7f541100d640 -1 client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:39 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:39.658+0000 7f5416197640 -1 client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:39.658+0000 7f5416197640 -1 client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:39.658+0000 7f5416197640 -1 client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:39.658+0000 7f5416197640 -1 client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T09:52:39.658+0000 7f5416197640 -1 client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: client.0 error registering admin socket command: (17) File exists
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TaskHandler: starting
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"} v 0)
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"}]: dispatch
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: [01/Dec/2025:09:52:39] ENGINE Serving on http://:::9283
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.error] [01/Dec/2025:09:52:39] ENGINE Serving on http://:::9283
Dec  1 04:52:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: [01/Dec/2025:09:52:39] ENGINE Bus STARTED
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.error] [01/Dec/2025:09:52:39] ENGINE Bus STARTED
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [prometheus INFO root] Engine started.
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] setup complete
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  1 04:52:39 np0005540825 systemd-logind[789]: New session 38 of user ceph-admin.
Dec  1 04:52:39 np0005540825 systemd[1]: Started Session 38 of User ceph-admin.
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdtkls restarted
Dec  1 04:52:39 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.kdtkls started
Dec  1 04:52:39 np0005540825 ceph-mgr[74709]: [dashboard INFO dashboard.module] Engine started.
Dec  1 04:52:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:40 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10004060 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:40.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:40 np0005540825 ceph-mon[74416]: Manager daemon compute-0.fospow is now available
Dec  1 04:52:40 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:40 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:40 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/mirror_snapshot_schedule"}]: dispatch
Dec  1 04:52:40 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fospow/trash_purge_schedule"}]: dispatch
Dec  1 04:52:40 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.fospow(active, since 1.5343s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:52:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:40 np0005540825 podman[100096]: 2025-12-01 09:52:40.666830911 +0000 UTC m=+0.070416469 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:52:40 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:52:40] ENGINE Bus STARTING
Dec  1 04:52:40 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:52:40] ENGINE Bus STARTING
Dec  1 04:52:40 np0005540825 podman[100096]: 2025-12-01 09:52:40.752858373 +0000 UTC m=+0.156443921 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec  1 04:52:40 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:52:40] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:52:40 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:52:40] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:52:40 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:52:40] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:52:40 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:52:40] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:52:40 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:52:40] ENGINE Bus STARTED
Dec  1 04:52:40 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:52:40] ENGINE Bus STARTED
Dec  1 04:52:40 np0005540825 ceph-mgr[74709]: [cephadm INFO cherrypy.error] [01/Dec/2025:09:52:40] ENGINE Client ('192.168.122.100', 52120) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:52:40 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : [01/Dec/2025:09:52:40] ENGINE Client ('192.168.122.100', 52120) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:52:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:41 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:41.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec  1 04:52:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  1 04:52:41 np0005540825 podman[100239]: 2025-12-01 09:52:41.288875291 +0000 UTC m=+0.066028362 container exec cd3077bd2d5a007c3a726828ac7eae9ffbb7d553deec632ef7494e1db8acac45 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:41 np0005540825 podman[100239]: 2025-12-01 09:52:41.299562875 +0000 UTC m=+0.076715926 container exec_died cd3077bd2d5a007c3a726828ac7eae9ffbb7d553deec632ef7494e1db8acac45 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec  1 04:52:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:52:41] "GET /metrics HTTP/1.1" 200 46655 "" "Prometheus/2.51.0"
Dec  1 04:52:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:52:41] "GET /metrics HTTP/1.1" 200 46655 "" "Prometheus/2.51.0"
Dec  1 04:52:41 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  1 04:52:41 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Check health
Dec  1 04:52:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  1 04:52:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec  1 04:52:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 99 pg[10.1d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=99) [1] r=0 lpr=99 pi=[78,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:41 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec  1 04:52:41 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 99 pg[10.d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=99) [1] r=0 lpr=99 pi=[78,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:41 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.fospow(active, since 2s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:52:41 np0005540825 podman[100343]: 2025-12-01 09:52:41.641151195 +0000 UTC m=+0.072009131 container exec 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 04:52:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:41 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad080040f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:41 np0005540825 podman[100343]: 2025-12-01 09:52:41.66094357 +0000 UTC m=+0.091801496 container exec_died 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 04:52:41 np0005540825 podman[100407]: 2025-12-01 09:52:41.91463893 +0000 UTC m=+0.072818403 container exec 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 04:52:41 np0005540825 podman[100407]: 2025-12-01 09:52:41.926599787 +0000 UTC m=+0.084779180 container exec_died 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:52:42 np0005540825 podman[100475]: 2025-12-01 09:52:42.214338499 +0000 UTC m=+0.075914424 container exec a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, distribution-scope=public, io.openshift.expose-services=, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, build-date=2023-02-22T09:23:20)
Dec  1 04:52:42 np0005540825 podman[100475]: 2025-12-01 09:52:42.233106277 +0000 UTC m=+0.094682222 container exec_died a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.component=keepalived-container, vcs-type=git, version=2.2.4, distribution-scope=public, release=1793, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 04:52:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:42 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:42.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:42 np0005540825 systemd-logind[789]: New session 39 of user zuul.
Dec  1 04:52:42 np0005540825 systemd[1]: Started Session 39 of User zuul.
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:42 np0005540825 podman[100545]: 2025-12-01 09:52:42.479161914 +0000 UTC m=+0.064013369 container exec 0511cb329529d79a0314faf710797871465300fa18afe5331763ee944339d662 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec  1 04:52:42 np0005540825 podman[100545]: 2025-12-01 09:52:42.510157346 +0000 UTC m=+0.095008791 container exec_died 0511cb329529d79a0314faf710797871465300fa18afe5331763ee944339d662 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:52:40] ENGINE Bus STARTING
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:52:40] ENGINE Serving on http://192.168.122.100:8765
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:52:40] ENGINE Serving on https://192.168.122.100:7150
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:52:40] ENGINE Bus STARTED
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: [01/Dec/2025:09:52:40] ENGINE Client ('192.168.122.100', 52120) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  1 04:52:42 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:43 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10004060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:43 np0005540825 python3.9[100735]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  1 04:52:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:43.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v6: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:52:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 100 pg[10.1d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[78,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 100 pg[10.1d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[78,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 100 pg[10.d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[78,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:43 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 100 pg[10.d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[78,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:43 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.fospow(active, since 4s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 04:52:43 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec  1 04:52:43 np0005540825 podman[100848]: 2025-12-01 09:52:43.889441891 +0000 UTC m=+0.070501481 container exec 6eb1185f94a74a666c6b5c09efc32bc1424dea31547c65157a432674ce35a678 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:44 np0005540825 podman[100848]: 2025-12-01 09:52:44.084374482 +0000 UTC m=+0.265434122 container exec_died 6eb1185f94a74a666c6b5c09efc32bc1424dea31547c65157a432674ce35a678 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:52:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:44 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:44.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:44 np0005540825 podman[101057]: 2025-12-01 09:52:44.516951396 +0000 UTC m=+0.081507233 container exec f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:44 np0005540825 podman[101057]: 2025-12-01 09:52:44.569829269 +0000 UTC m=+0.134385056 container exec_died f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec  1 04:52:44 np0005540825 python3.9[101030]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  1 04:52:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 04:52:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:45 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad080040f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:45.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v9: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  1 04:52:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:45 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:45 np0005540825 python3.9[101387]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 102 pg[10.1d( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=100/78 les/c/f=101/79/0 sis=102) [1] r=0 lpr=102 pi=[78,102)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 102 pg[10.1d( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=100/78 les/c/f=101/79/0 sis=102) [1] r=0 lpr=102 pi=[78,102)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 102 pg[10.d( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=8 ec=61/50 lis/c=100/78 les/c/f=101/79/0 sis=102) [1] r=0 lpr=102 pi=[78,102)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 102 pg[10.d( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=8 ec=61/50 lis/c=100/78 les/c/f=101/79/0 sis=102) [1] r=0 lpr=102 pi=[78,102)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 102 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=7 ec=61/50 lis/c=82/82 les/c/f=83/83/0 sis=102 pruub=8.260265350s) [2] r=-1 lpr=102 pi=[82,102)/1 crt=56'1015 mlcod 0'0 active pruub 266.544433594s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 102 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=7 ec=61/50 lis/c=82/82 les/c/f=83/83/0 sis=102 pruub=8.260240555s) [2] r=-1 lpr=102 pi=[82,102)/1 crt=56'1015 mlcod 0'0 unknown NOTIFY pruub 266.544433594s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 102 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=5 ec=61/50 lis/c=82/82 les/c/f=83/83/0 sis=102 pruub=8.259071350s) [2] r=-1 lpr=102 pi=[82,102)/1 crt=56'1015 mlcod 0'0 active pruub 266.544433594s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:45 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 102 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=5 ec=61/50 lis/c=82/82 les/c/f=83/83/0 sis=102 pruub=8.259039879s) [2] r=-1 lpr=102 pi=[82,102)/1 crt=56'1015 mlcod 0'0 unknown NOTIFY pruub 266.544433594s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:52:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:52:45 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:52:45 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:52:45 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:52:45 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  1 04:52:45 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:52:45 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:46 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10004060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:46.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:46 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: Updating compute-0:/etc/ceph/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: Updating compute-1:/etc/ceph/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: Updating compute-2:/etc/ceph/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.conf
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec  1 04:52:46 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec  1 04:52:46 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 103 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=7 ec=61/50 lis/c=82/82 les/c/f=83/83/0 sis=103) [2]/[1] r=0 lpr=103 pi=[82,103)/1 crt=56'1015 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:46 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 103 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=7 ec=61/50 lis/c=82/82 les/c/f=83/83/0 sis=103) [2]/[1] r=0 lpr=103 pi=[82,103)/1 crt=56'1015 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:46 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 103 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=5 ec=61/50 lis/c=82/82 les/c/f=83/83/0 sis=103) [2]/[1] r=0 lpr=103 pi=[82,103)/1 crt=56'1015 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:46 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 103 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=82/83 n=5 ec=61/50 lis/c=82/82 les/c/f=83/83/0 sis=103) [2]/[1] r=0 lpr=103 pi=[82,103)/1 crt=56'1015 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:46 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 103 pg[10.d( v 56'1015 (0'0,56'1015] local-lis/les=102/103 n=8 ec=61/50 lis/c=100/78 les/c/f=101/79/0 sis=102) [1] r=0 lpr=102 pi=[78,102)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:46 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 103 pg[10.1d( v 56'1015 (0'0,56'1015] local-lis/les=102/103 n=5 ec=61/50 lis/c=100/78 les/c/f=101/79/0 sis=102) [1] r=0 lpr=102 pi=[78,102)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:46 np0005540825 python3.9[101902]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:47 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10004060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:47.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v12: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 18 op/s; 82 B/s, 3 objects/s recovering
Dec  1 04:52:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec  1 04:52:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:52:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:47 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:47 np0005540825 python3.9[102432]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:52:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec  1 04:52:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:52:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:52:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:48 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:48.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:48 np0005540825 python3.9[102758]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:52:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:49 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:52:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:52:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v13: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 13 op/s; 58 B/s, 2 objects/s recovering
Dec  1 04:52:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec  1 04:52:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  1 04:52:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:49 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:49 np0005540825 python3.9[102910]: ansible-ansible.builtin.service_facts Invoked
Dec  1 04:52:49 np0005540825 network[102927]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 04:52:49 np0005540825 network[102928]: 'network-scripts' will be removed from distribution in near future.
Dec  1 04:52:49 np0005540825 network[102929]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 04:52:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  1 04:52:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:50 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 104 pg[10.10( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=2 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=104 pruub=13.998267174s) [2] r=-1 lpr=104 pi=[61,104)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 276.499267578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:50 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 104 pg[10.10( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=2 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=104 pruub=13.998229980s) [2] r=-1 lpr=104 pi=[61,104)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 276.499267578s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:50 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 104 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=103/104 n=7 ec=61/50 lis/c=82/82 les/c/f=83/83/0 sis=103) [2]/[1] async=[2] r=0 lpr=103 pi=[82,103)/1 crt=56'1015 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:50 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 104 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=103/104 n=5 ec=61/50 lis/c=82/82 les/c/f=83/83/0 sis=103) [2]/[1] async=[2] r=0 lpr=103 pi=[82,103)/1 crt=56'1015 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:52:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:50 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10004060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:50.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:52:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: Updating compute-0:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: Updating compute-2:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: Updating compute-1:/var/lib/ceph/365f19c2-81e5-5edd-b6b4-280555214d3a/config/ceph.client.admin.keyring
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec  1 04:52:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:51 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10004060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:51 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec  1 04:52:51 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 105 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=103/104 n=7 ec=61/50 lis/c=103/82 les/c/f=104/83/0 sis=105 pruub=15.069115639s) [2] async=[2] r=-1 lpr=105 pi=[82,105)/1 crt=56'1015 mlcod 56'1015 active pruub 278.539520264s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:51 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 105 pg[10.f( v 56'1015 (0'0,56'1015] local-lis/les=103/104 n=7 ec=61/50 lis/c=103/82 les/c/f=104/83/0 sis=105 pruub=15.069049835s) [2] r=-1 lpr=105 pi=[82,105)/1 crt=56'1015 mlcod 0'0 unknown NOTIFY pruub 278.539520264s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:51 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 105 pg[10.10( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=2 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=105) [2]/[1] r=0 lpr=105 pi=[61,105)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:51 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 105 pg[10.10( v 56'1015 (0'0,56'1015] local-lis/les=61/62 n=2 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=105) [2]/[1] r=0 lpr=105 pi=[61,105)/1 crt=56'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:52:51 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 105 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=103/104 n=5 ec=61/50 lis/c=103/82 les/c/f=104/83/0 sis=105 pruub=15.068728447s) [2] async=[2] r=-1 lpr=105 pi=[82,105)/1 crt=56'1015 mlcod 56'1015 active pruub 278.539581299s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:51 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 105 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=103/104 n=5 ec=61/50 lis/c=103/82 les/c/f=104/83/0 sis=105 pruub=15.068688393s) [2] r=-1 lpr=105 pi=[82,105)/1 crt=56'1015 mlcod 0'0 unknown NOTIFY pruub 278.539581299s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:51 np0005540825 podman[103043]: 2025-12-01 09:52:51.064747756 +0000 UTC m=+0.050814659 container create 7783cc0442ed163c752f0fce0c5a65e28514675b7a8eb79ad37cf9bfefa6be04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 04:52:51 np0005540825 systemd[1]: Started libpod-conmon-7783cc0442ed163c752f0fce0c5a65e28514675b7a8eb79ad37cf9bfefa6be04.scope.
Dec  1 04:52:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:51 np0005540825 podman[103043]: 2025-12-01 09:52:51.039109066 +0000 UTC m=+0.025175989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:52:51 np0005540825 podman[103043]: 2025-12-01 09:52:51.141221825 +0000 UTC m=+0.127288748 container init 7783cc0442ed163c752f0fce0c5a65e28514675b7a8eb79ad37cf9bfefa6be04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hofstadter, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  1 04:52:51 np0005540825 podman[103043]: 2025-12-01 09:52:51.147261065 +0000 UTC m=+0.133327968 container start 7783cc0442ed163c752f0fce0c5a65e28514675b7a8eb79ad37cf9bfefa6be04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec  1 04:52:51 np0005540825 podman[103043]: 2025-12-01 09:52:51.150245634 +0000 UTC m=+0.136312527 container attach 7783cc0442ed163c752f0fce0c5a65e28514675b7a8eb79ad37cf9bfefa6be04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  1 04:52:51 np0005540825 great_hofstadter[103064]: 167 167
Dec  1 04:52:51 np0005540825 systemd[1]: libpod-7783cc0442ed163c752f0fce0c5a65e28514675b7a8eb79ad37cf9bfefa6be04.scope: Deactivated successfully.
Dec  1 04:52:51 np0005540825 conmon[103064]: conmon 7783cc0442ed163c752f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7783cc0442ed163c752f0fce0c5a65e28514675b7a8eb79ad37cf9bfefa6be04.scope/container/memory.events
Dec  1 04:52:51 np0005540825 podman[103043]: 2025-12-01 09:52:51.152578476 +0000 UTC m=+0.138645379 container died 7783cc0442ed163c752f0fce0c5a65e28514675b7a8eb79ad37cf9bfefa6be04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hofstadter, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  1 04:52:51 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ef646920907357b15b121c3d3c8b4147448880a2148311d6aea4792742c6e877-merged.mount: Deactivated successfully.
Dec  1 04:52:51 np0005540825 podman[103043]: 2025-12-01 09:52:51.197363904 +0000 UTC m=+0.183430827 container remove 7783cc0442ed163c752f0fce0c5a65e28514675b7a8eb79ad37cf9bfefa6be04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hofstadter, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Dec  1 04:52:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:51.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:51 np0005540825 systemd[1]: libpod-conmon-7783cc0442ed163c752f0fce0c5a65e28514675b7a8eb79ad37cf9bfefa6be04.scope: Deactivated successfully.
Dec  1 04:52:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v16: 353 pgs: 1 active+recovering+remapped, 1 unknown, 1 active+remapped, 350 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 189 B/s rd, 0 op/s; 4/221 objects misplaced (1.810%); 40 B/s, 1 objects/s recovering
Dec  1 04:52:51 np0005540825 podman[103096]: 2025-12-01 09:52:51.350550007 +0000 UTC m=+0.038128452 container create 59da449c30db705d74d84a2bf33be42a872675f6f34431d1f549dbfad51edcff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_raman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  1 04:52:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:52:51] "GET /metrics HTTP/1.1" 200 46655 "" "Prometheus/2.51.0"
Dec  1 04:52:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:52:51] "GET /metrics HTTP/1.1" 200 46655 "" "Prometheus/2.51.0"
Dec  1 04:52:51 np0005540825 systemd[1]: Started libpod-conmon-59da449c30db705d74d84a2bf33be42a872675f6f34431d1f549dbfad51edcff.scope.
Dec  1 04:52:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d045edec01e3c15915b37f31ac5184296367a8f7b62f3d0bd68f689f8451dae1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d045edec01e3c15915b37f31ac5184296367a8f7b62f3d0bd68f689f8451dae1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d045edec01e3c15915b37f31ac5184296367a8f7b62f3d0bd68f689f8451dae1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d045edec01e3c15915b37f31ac5184296367a8f7b62f3d0bd68f689f8451dae1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d045edec01e3c15915b37f31ac5184296367a8f7b62f3d0bd68f689f8451dae1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:51 np0005540825 podman[103096]: 2025-12-01 09:52:51.334988204 +0000 UTC m=+0.022566639 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:52:51 np0005540825 podman[103096]: 2025-12-01 09:52:51.436459046 +0000 UTC m=+0.124037491 container init 59da449c30db705d74d84a2bf33be42a872675f6f34431d1f549dbfad51edcff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_raman, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 04:52:51 np0005540825 podman[103096]: 2025-12-01 09:52:51.442455715 +0000 UTC m=+0.130034150 container start 59da449c30db705d74d84a2bf33be42a872675f6f34431d1f549dbfad51edcff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:52:51 np0005540825 podman[103096]: 2025-12-01 09:52:51.446245275 +0000 UTC m=+0.133823720 container attach 59da449c30db705d74d84a2bf33be42a872675f6f34431d1f549dbfad51edcff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:52:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:51 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:51 np0005540825 practical_raman[103112]: --> passed data devices: 0 physical, 1 LVM
Dec  1 04:52:51 np0005540825 practical_raman[103112]: --> All data devices are unavailable
Dec  1 04:52:51 np0005540825 systemd[1]: libpod-59da449c30db705d74d84a2bf33be42a872675f6f34431d1f549dbfad51edcff.scope: Deactivated successfully.
Dec  1 04:52:51 np0005540825 podman[103096]: 2025-12-01 09:52:51.791992016 +0000 UTC m=+0.479570451 container died 59da449c30db705d74d84a2bf33be42a872675f6f34431d1f549dbfad51edcff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 04:52:51 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d045edec01e3c15915b37f31ac5184296367a8f7b62f3d0bd68f689f8451dae1-merged.mount: Deactivated successfully.
Dec  1 04:52:51 np0005540825 podman[103096]: 2025-12-01 09:52:51.851821363 +0000 UTC m=+0.539399818 container remove 59da449c30db705d74d84a2bf33be42a872675f6f34431d1f549dbfad51edcff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_raman, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:52:51 np0005540825 systemd[1]: libpod-conmon-59da449c30db705d74d84a2bf33be42a872675f6f34431d1f549dbfad51edcff.scope: Deactivated successfully.
Dec  1 04:52:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec  1 04:52:52 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  1 04:52:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec  1 04:52:52 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec  1 04:52:52 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 106 pg[10.10( v 56'1015 (0'0,56'1015] local-lis/les=105/106 n=2 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=105) [2]/[1] async=[2] r=0 lpr=105 pi=[61,105)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:52:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:52 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:52.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:52 np0005540825 podman[103238]: 2025-12-01 09:52:52.422156032 +0000 UTC m=+0.020573447 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:52:52 np0005540825 podman[103238]: 2025-12-01 09:52:52.533540706 +0000 UTC m=+0.131958101 container create 7ebec080d2db8f8c7b80902027b2b673c3183c23e00cab135709f04f42759380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_allen, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:52:52 np0005540825 systemd[1]: Started libpod-conmon-7ebec080d2db8f8c7b80902027b2b673c3183c23e00cab135709f04f42759380.scope.
Dec  1 04:52:52 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:52 np0005540825 podman[103238]: 2025-12-01 09:52:52.856721789 +0000 UTC m=+0.455139204 container init 7ebec080d2db8f8c7b80902027b2b673c3183c23e00cab135709f04f42759380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_allen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  1 04:52:52 np0005540825 podman[103238]: 2025-12-01 09:52:52.863866538 +0000 UTC m=+0.462283933 container start 7ebec080d2db8f8c7b80902027b2b673c3183c23e00cab135709f04f42759380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_allen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Dec  1 04:52:52 np0005540825 funny_allen[103274]: 167 167
Dec  1 04:52:52 np0005540825 systemd[1]: libpod-7ebec080d2db8f8c7b80902027b2b673c3183c23e00cab135709f04f42759380.scope: Deactivated successfully.
Dec  1 04:52:52 np0005540825 podman[103238]: 2025-12-01 09:52:52.909344765 +0000 UTC m=+0.507762250 container attach 7ebec080d2db8f8c7b80902027b2b673c3183c23e00cab135709f04f42759380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 04:52:52 np0005540825 podman[103238]: 2025-12-01 09:52:52.910827144 +0000 UTC m=+0.509244579 container died 7ebec080d2db8f8c7b80902027b2b673c3183c23e00cab135709f04f42759380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_allen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 04:52:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:53 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad200008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:53 np0005540825 systemd[1]: var-lib-containers-storage-overlay-00b4a96450f26080d67afc4c2c5c39a4c4f57fa315c26a824a2d943e45910aeb-merged.mount: Deactivated successfully.
Dec  1 04:52:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec  1 04:52:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:53.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v18: 353 pgs: 1 active+recovering+remapped, 1 unknown, 1 active+remapped, 350 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 4/221 objects misplaced (1.810%); 36 B/s, 1 objects/s recovering
Dec  1 04:52:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec  1 04:52:53 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec  1 04:52:53 np0005540825 podman[103238]: 2025-12-01 09:52:53.3989302 +0000 UTC m=+0.997347635 container remove 7ebec080d2db8f8c7b80902027b2b673c3183c23e00cab135709f04f42759380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 04:52:53 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 107 pg[10.10( v 56'1015 (0'0,56'1015] local-lis/les=105/106 n=2 ec=61/50 lis/c=105/61 les/c/f=106/62/0 sis=107 pruub=14.747655869s) [2] async=[2] r=-1 lpr=107 pi=[61,107)/1 crt=56'1015 lcod 0'0 mlcod 0'0 active pruub 280.598144531s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:52:53 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 107 pg[10.10( v 56'1015 (0'0,56'1015] local-lis/les=105/106 n=2 ec=61/50 lis/c=105/61 les/c/f=106/62/0 sis=107 pruub=14.746932983s) [2] r=-1 lpr=107 pi=[61,107)/1 crt=56'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 280.598144531s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:52:53 np0005540825 systemd[1]: libpod-conmon-7ebec080d2db8f8c7b80902027b2b673c3183c23e00cab135709f04f42759380.scope: Deactivated successfully.
Dec  1 04:52:53 np0005540825 podman[103343]: 2025-12-01 09:52:53.606898707 +0000 UTC m=+0.039768476 container create 6db2d4f6ebc817e052ee03efe336e3f2e766121f850b0b61b37850945a30bcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_lalande, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:52:53 np0005540825 systemd[1]: Started libpod-conmon-6db2d4f6ebc817e052ee03efe336e3f2e766121f850b0b61b37850945a30bcbc.scope.
Dec  1 04:52:53 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b98c731cc8eb2b3d86c65c0ccef1ad5ac5a1cef9895939f80f19cb5f27bfdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b98c731cc8eb2b3d86c65c0ccef1ad5ac5a1cef9895939f80f19cb5f27bfdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b98c731cc8eb2b3d86c65c0ccef1ad5ac5a1cef9895939f80f19cb5f27bfdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b98c731cc8eb2b3d86c65c0ccef1ad5ac5a1cef9895939f80f19cb5f27bfdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:53 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad200008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:53 np0005540825 podman[103343]: 2025-12-01 09:52:53.667229887 +0000 UTC m=+0.100099686 container init 6db2d4f6ebc817e052ee03efe336e3f2e766121f850b0b61b37850945a30bcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:52:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:52:53 np0005540825 podman[103343]: 2025-12-01 09:52:53.675908957 +0000 UTC m=+0.108778726 container start 6db2d4f6ebc817e052ee03efe336e3f2e766121f850b0b61b37850945a30bcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  1 04:52:53 np0005540825 podman[103343]: 2025-12-01 09:52:53.679896313 +0000 UTC m=+0.112766112 container attach 6db2d4f6ebc817e052ee03efe336e3f2e766121f850b0b61b37850945a30bcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_lalande, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:52:53 np0005540825 podman[103343]: 2025-12-01 09:52:53.587656916 +0000 UTC m=+0.020526715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]: {
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:    "1": [
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:        {
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            "devices": [
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "/dev/loop3"
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            ],
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            "lv_name": "ceph_lv0",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            "lv_size": "21470642176",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            "name": "ceph_lv0",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            "tags": {
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.cephx_lockbox_secret": "",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.cluster_name": "ceph",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.crush_device_class": "",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.encrypted": "0",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.osd_id": "1",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.type": "block",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.vdo": "0",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:                "ceph.with_tpm": "0"
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            },
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            "type": "block",
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:            "vg_name": "ceph_vg0"
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:        }
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]:    ]
Dec  1 04:52:53 np0005540825 pedantic_lalande[103362]: }
Dec  1 04:52:54 np0005540825 systemd[1]: libpod-6db2d4f6ebc817e052ee03efe336e3f2e766121f850b0b61b37850945a30bcbc.scope: Deactivated successfully.
Dec  1 04:52:54 np0005540825 podman[103343]: 2025-12-01 09:52:54.003012454 +0000 UTC m=+0.435882253 container died 6db2d4f6ebc817e052ee03efe336e3f2e766121f850b0b61b37850945a30bcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:52:54 np0005540825 systemd[1]: var-lib-containers-storage-overlay-13b98c731cc8eb2b3d86c65c0ccef1ad5ac5a1cef9895939f80f19cb5f27bfdd-merged.mount: Deactivated successfully.
Dec  1 04:52:54 np0005540825 podman[103343]: 2025-12-01 09:52:54.055169987 +0000 UTC m=+0.488039766 container remove 6db2d4f6ebc817e052ee03efe336e3f2e766121f850b0b61b37850945a30bcbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_lalande, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 04:52:54 np0005540825 systemd[1]: libpod-conmon-6db2d4f6ebc817e052ee03efe336e3f2e766121f850b0b61b37850945a30bcbc.scope: Deactivated successfully.
Dec  1 04:52:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:54 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad200008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:54.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec  1 04:52:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec  1 04:52:54 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec  1 04:52:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:52:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:52:54 np0005540825 python3.9[103585]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:52:54 np0005540825 podman[103627]: 2025-12-01 09:52:54.661422488 +0000 UTC m=+0.041480181 container create 89253ec822a7dce85d01e859eec86b1f911f66f4693c1b9f02a59d03fd78294c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_driscoll, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:52:54 np0005540825 systemd[1]: Started libpod-conmon-89253ec822a7dce85d01e859eec86b1f911f66f4693c1b9f02a59d03fd78294c.scope.
Dec  1 04:52:54 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:54 np0005540825 podman[103627]: 2025-12-01 09:52:54.645524017 +0000 UTC m=+0.025581730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:52:54 np0005540825 podman[103627]: 2025-12-01 09:52:54.746901546 +0000 UTC m=+0.126959249 container init 89253ec822a7dce85d01e859eec86b1f911f66f4693c1b9f02a59d03fd78294c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:52:54 np0005540825 podman[103627]: 2025-12-01 09:52:54.753130871 +0000 UTC m=+0.133188564 container start 89253ec822a7dce85d01e859eec86b1f911f66f4693c1b9f02a59d03fd78294c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec  1 04:52:54 np0005540825 podman[103627]: 2025-12-01 09:52:54.756822029 +0000 UTC m=+0.136879742 container attach 89253ec822a7dce85d01e859eec86b1f911f66f4693c1b9f02a59d03fd78294c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:52:54 np0005540825 suspicious_driscoll[103668]: 167 167
Dec  1 04:52:54 np0005540825 systemd[1]: libpod-89253ec822a7dce85d01e859eec86b1f911f66f4693c1b9f02a59d03fd78294c.scope: Deactivated successfully.
Dec  1 04:52:54 np0005540825 podman[103677]: 2025-12-01 09:52:54.820013575 +0000 UTC m=+0.039746405 container died 89253ec822a7dce85d01e859eec86b1f911f66f4693c1b9f02a59d03fd78294c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_driscoll, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:52:54 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f942dab5414fc29f131c71439a88b4a326f75890ae081b42ea0b52b22871f19f-merged.mount: Deactivated successfully.
Dec  1 04:52:54 np0005540825 podman[103677]: 2025-12-01 09:52:54.868754318 +0000 UTC m=+0.088487178 container remove 89253ec822a7dce85d01e859eec86b1f911f66f4693c1b9f02a59d03fd78294c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  1 04:52:54 np0005540825 systemd[1]: libpod-conmon-89253ec822a7dce85d01e859eec86b1f911f66f4693c1b9f02a59d03fd78294c.scope: Deactivated successfully.
Dec  1 04:52:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:55 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:55 np0005540825 podman[103776]: 2025-12-01 09:52:55.070632643 +0000 UTC m=+0.059230072 container create b859b7264793a5cad536938b122de0b2b1b085075701dc820230ec2d48103be6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kepler, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  1 04:52:55 np0005540825 systemd[1]: Started libpod-conmon-b859b7264793a5cad536938b122de0b2b1b085075701dc820230ec2d48103be6.scope.
Dec  1 04:52:55 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7f7fca3c7df550dfc9bbcee1d986654bd2f5acd61a9adac73cb5e147044a5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7f7fca3c7df550dfc9bbcee1d986654bd2f5acd61a9adac73cb5e147044a5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7f7fca3c7df550dfc9bbcee1d986654bd2f5acd61a9adac73cb5e147044a5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7f7fca3c7df550dfc9bbcee1d986654bd2f5acd61a9adac73cb5e147044a5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:52:55 np0005540825 podman[103776]: 2025-12-01 09:52:55.051471894 +0000 UTC m=+0.040069343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:52:55 np0005540825 podman[103776]: 2025-12-01 09:52:55.157753983 +0000 UTC m=+0.146351412 container init b859b7264793a5cad536938b122de0b2b1b085075701dc820230ec2d48103be6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  1 04:52:55 np0005540825 podman[103776]: 2025-12-01 09:52:55.166460225 +0000 UTC m=+0.155057644 container start b859b7264793a5cad536938b122de0b2b1b085075701dc820230ec2d48103be6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kepler, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:52:55 np0005540825 podman[103776]: 2025-12-01 09:52:55.170456581 +0000 UTC m=+0.159054010 container attach b859b7264793a5cad536938b122de0b2b1b085075701dc820230ec2d48103be6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kepler, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 04:52:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:52:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:55.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:52:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v21: 353 pgs: 1 active+recovering+remapped, 1 unknown, 1 active+remapped, 350 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 4/221 objects misplaced (1.810%)
Dec  1 04:52:55 np0005540825 python3.9[103834]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:52:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:55 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:55 np0005540825 lvm[103941]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:52:55 np0005540825 lvm[103941]: VG ceph_vg0 finished
Dec  1 04:52:55 np0005540825 affectionate_kepler[103837]: {}
Dec  1 04:52:55 np0005540825 systemd[1]: libpod-b859b7264793a5cad536938b122de0b2b1b085075701dc820230ec2d48103be6.scope: Deactivated successfully.
Dec  1 04:52:55 np0005540825 systemd[1]: libpod-b859b7264793a5cad536938b122de0b2b1b085075701dc820230ec2d48103be6.scope: Consumed 1.089s CPU time.
Dec  1 04:52:55 np0005540825 podman[103776]: 2025-12-01 09:52:55.845047094 +0000 UTC m=+0.833644513 container died b859b7264793a5cad536938b122de0b2b1b085075701dc820230ec2d48103be6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kepler, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 04:52:55 np0005540825 systemd[1]: var-lib-containers-storage-overlay-eb7f7fca3c7df550dfc9bbcee1d986654bd2f5acd61a9adac73cb5e147044a5c-merged.mount: Deactivated successfully.
Dec  1 04:52:55 np0005540825 podman[103776]: 2025-12-01 09:52:55.932786602 +0000 UTC m=+0.921384021 container remove b859b7264793a5cad536938b122de0b2b1b085075701dc820230ec2d48103be6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:52:55 np0005540825 systemd[1]: libpod-conmon-b859b7264793a5cad536938b122de0b2b1b085075701dc820230ec2d48103be6.scope: Deactivated successfully.
Dec  1 04:52:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:56 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Dec  1 04:52:56 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:52:56 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  1 04:52:56 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  1 04:52:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:56 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:56.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:56 np0005540825 podman[104197]: 2025-12-01 09:52:56.662155759 +0000 UTC m=+0.049498544 container create 0f2ff8e5e639af896f052ab89709148253d6f48629792e3ffeb828bdd27cad33 (image=quay.io/ceph/ceph:v19, name=tender_wilson, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 04:52:56 np0005540825 python3.9[104181]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:52:56 np0005540825 systemd[1]: Started libpod-conmon-0f2ff8e5e639af896f052ab89709148253d6f48629792e3ffeb828bdd27cad33.scope.
Dec  1 04:52:56 np0005540825 podman[104197]: 2025-12-01 09:52:56.639010625 +0000 UTC m=+0.026353450 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:52:56 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:56 np0005540825 podman[104197]: 2025-12-01 09:52:56.762250244 +0000 UTC m=+0.149593039 container init 0f2ff8e5e639af896f052ab89709148253d6f48629792e3ffeb828bdd27cad33 (image=quay.io/ceph/ceph:v19, name=tender_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 04:52:56 np0005540825 podman[104197]: 2025-12-01 09:52:56.768931281 +0000 UTC m=+0.156274056 container start 0f2ff8e5e639af896f052ab89709148253d6f48629792e3ffeb828bdd27cad33 (image=quay.io/ceph/ceph:v19, name=tender_wilson, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  1 04:52:56 np0005540825 podman[104197]: 2025-12-01 09:52:56.772134566 +0000 UTC m=+0.159477341 container attach 0f2ff8e5e639af896f052ab89709148253d6f48629792e3ffeb828bdd27cad33 (image=quay.io/ceph/ceph:v19, name=tender_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:52:56 np0005540825 tender_wilson[104218]: 167 167
Dec  1 04:52:56 np0005540825 podman[104197]: 2025-12-01 09:52:56.773957994 +0000 UTC m=+0.161300779 container died 0f2ff8e5e639af896f052ab89709148253d6f48629792e3ffeb828bdd27cad33 (image=quay.io/ceph/ceph:v19, name=tender_wilson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:52:56 np0005540825 systemd[1]: libpod-0f2ff8e5e639af896f052ab89709148253d6f48629792e3ffeb828bdd27cad33.scope: Deactivated successfully.
Dec  1 04:52:56 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f7017e9e8f4c0199b8803fe5a2ee40dfae144f38c5d6e82612bdae50384a6e6a-merged.mount: Deactivated successfully.
Dec  1 04:52:56 np0005540825 podman[104197]: 2025-12-01 09:52:56.812571178 +0000 UTC m=+0.199913953 container remove 0f2ff8e5e639af896f052ab89709148253d6f48629792e3ffeb828bdd27cad33 (image=quay.io/ceph/ceph:v19, name=tender_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: Reconfiguring mon.compute-0 (monmap changed)...
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  1 04:52:56 np0005540825 systemd[1]: libpod-conmon-0f2ff8e5e639af896f052ab89709148253d6f48629792e3ffeb828bdd27cad33.scope: Deactivated successfully.
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:56 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.fospow (monmap changed)...
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fospow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fospow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  1 04:52:56 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.fospow (monmap changed)...
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:52:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:52:56 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.fospow on compute-0
Dec  1 04:52:56 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.fospow on compute-0
Dec  1 04:52:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:57 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad200008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:57.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v22: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  1 04:52:57 np0005540825 podman[104337]: 2025-12-01 09:52:57.30879772 +0000 UTC m=+0.034433624 container create b88b791eaea54173a0b6a3b4e9ba5f05fe2b66f5bdb6364c64380ecd85a97aa0 (image=quay.io/ceph/ceph:v19, name=kind_morse, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:52:57 np0005540825 systemd[1]: Started libpod-conmon-b88b791eaea54173a0b6a3b4e9ba5f05fe2b66f5bdb6364c64380ecd85a97aa0.scope.
Dec  1 04:52:57 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:57 np0005540825 podman[104337]: 2025-12-01 09:52:57.368168275 +0000 UTC m=+0.093804199 container init b88b791eaea54173a0b6a3b4e9ba5f05fe2b66f5bdb6364c64380ecd85a97aa0 (image=quay.io/ceph/ceph:v19, name=kind_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  1 04:52:57 np0005540825 podman[104337]: 2025-12-01 09:52:57.373901997 +0000 UTC m=+0.099537901 container start b88b791eaea54173a0b6a3b4e9ba5f05fe2b66f5bdb6364c64380ecd85a97aa0 (image=quay.io/ceph/ceph:v19, name=kind_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:52:57 np0005540825 kind_morse[104398]: 167 167
Dec  1 04:52:57 np0005540825 systemd[1]: libpod-b88b791eaea54173a0b6a3b4e9ba5f05fe2b66f5bdb6364c64380ecd85a97aa0.scope: Deactivated successfully.
Dec  1 04:52:57 np0005540825 podman[104337]: 2025-12-01 09:52:57.377716178 +0000 UTC m=+0.103352082 container attach b88b791eaea54173a0b6a3b4e9ba5f05fe2b66f5bdb6364c64380ecd85a97aa0 (image=quay.io/ceph/ceph:v19, name=kind_morse, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:52:57 np0005540825 podman[104337]: 2025-12-01 09:52:57.378584081 +0000 UTC m=+0.104219995 container died b88b791eaea54173a0b6a3b4e9ba5f05fe2b66f5bdb6364c64380ecd85a97aa0 (image=quay.io/ceph/ceph:v19, name=kind_morse, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 04:52:57 np0005540825 podman[104337]: 2025-12-01 09:52:57.295314982 +0000 UTC m=+0.020950906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  1 04:52:57 np0005540825 systemd[1]: var-lib-containers-storage-overlay-522d127005b1c28190484c0ad79939ded8b3bebc507b749db024b149e294f322-merged.mount: Deactivated successfully.
Dec  1 04:52:57 np0005540825 podman[104337]: 2025-12-01 09:52:57.414498414 +0000 UTC m=+0.140134318 container remove b88b791eaea54173a0b6a3b4e9ba5f05fe2b66f5bdb6364c64380ecd85a97aa0 (image=quay.io/ceph/ceph:v19, name=kind_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:52:57 np0005540825 systemd[1]: libpod-conmon-b88b791eaea54173a0b6a3b4e9ba5f05fe2b66f5bdb6364c64380ecd85a97aa0.scope: Deactivated successfully.
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:57 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Dec  1 04:52:57 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:52:57 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Dec  1 04:52:57 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Dec  1 04:52:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:57 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34002eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:57 np0005540825 python3.9[104514]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: Reconfiguring mgr.compute-0.fospow (monmap changed)...
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fospow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: Reconfiguring daemon mgr.compute-0.fospow on compute-0
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: Reconfiguring crash.compute-0 (monmap changed)...
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: Reconfiguring daemon crash.compute-0 on compute-0
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec  1 04:52:57 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec  1 04:52:58 np0005540825 podman[104560]: 2025-12-01 09:52:58.007671038 +0000 UTC m=+0.035815781 container create 4b3afdee71ac6786da27d25ad66a12a383858f6c5401fb58cfce050b886ab6cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 04:52:58 np0005540825 systemd[1]: Started libpod-conmon-4b3afdee71ac6786da27d25ad66a12a383858f6c5401fb58cfce050b886ab6cd.scope.
Dec  1 04:52:58 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:58 np0005540825 podman[104560]: 2025-12-01 09:52:57.991810247 +0000 UTC m=+0.019955010 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:52:58 np0005540825 podman[104560]: 2025-12-01 09:52:58.094257645 +0000 UTC m=+0.122402428 container init 4b3afdee71ac6786da27d25ad66a12a383858f6c5401fb58cfce050b886ab6cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 04:52:58 np0005540825 podman[104560]: 2025-12-01 09:52:58.100604993 +0000 UTC m=+0.128749736 container start 4b3afdee71ac6786da27d25ad66a12a383858f6c5401fb58cfce050b886ab6cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 04:52:58 np0005540825 systemd[1]: libpod-4b3afdee71ac6786da27d25ad66a12a383858f6c5401fb58cfce050b886ab6cd.scope: Deactivated successfully.
Dec  1 04:52:58 np0005540825 blissful_maxwell[104576]: 167 167
Dec  1 04:52:58 np0005540825 conmon[104576]: conmon 4b3afdee71ac6786da27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b3afdee71ac6786da27d25ad66a12a383858f6c5401fb58cfce050b886ab6cd.scope/container/memory.events
Dec  1 04:52:58 np0005540825 podman[104560]: 2025-12-01 09:52:58.10917679 +0000 UTC m=+0.137321543 container attach 4b3afdee71ac6786da27d25ad66a12a383858f6c5401fb58cfce050b886ab6cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 04:52:58 np0005540825 podman[104560]: 2025-12-01 09:52:58.10954889 +0000 UTC m=+0.137693643 container died 4b3afdee71ac6786da27d25ad66a12a383858f6c5401fb58cfce050b886ab6cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:52:58 np0005540825 systemd[1]: var-lib-containers-storage-overlay-be4b4fd592bf0304b40e8c444b5fffdca60a514b697956497bbd066c056f4486-merged.mount: Deactivated successfully.
Dec  1 04:52:58 np0005540825 podman[104560]: 2025-12-01 09:52:58.157563434 +0000 UTC m=+0.185708177 container remove 4b3afdee71ac6786da27d25ad66a12a383858f6c5401fb58cfce050b886ab6cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:52:58 np0005540825 systemd[1]: libpod-conmon-4b3afdee71ac6786da27d25ad66a12a383858f6c5401fb58cfce050b886ab6cd.scope: Deactivated successfully.
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:58 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec  1 04:52:58 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:52:58 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Dec  1 04:52:58 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Dec  1 04:52:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:58 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:52:58.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:58 np0005540825 podman[104739]: 2025-12-01 09:52:58.660310869 +0000 UTC m=+0.041248855 container create 898ed0ba68b3ec2100e1afd9df2abaca623395879461e6d8a28d62c071b6591c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:52:58 np0005540825 systemd[1]: Started libpod-conmon-898ed0ba68b3ec2100e1afd9df2abaca623395879461e6d8a28d62c071b6591c.scope.
Dec  1 04:52:58 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:52:58 np0005540825 podman[104739]: 2025-12-01 09:52:58.641204943 +0000 UTC m=+0.022142959 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:52:58 np0005540825 podman[104739]: 2025-12-01 09:52:58.744553654 +0000 UTC m=+0.125491670 container init 898ed0ba68b3ec2100e1afd9df2abaca623395879461e6d8a28d62c071b6591c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_elbakyan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 04:52:58 np0005540825 podman[104739]: 2025-12-01 09:52:58.753293926 +0000 UTC m=+0.134231912 container start 898ed0ba68b3ec2100e1afd9df2abaca623395879461e6d8a28d62c071b6591c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:52:58 np0005540825 podman[104739]: 2025-12-01 09:52:58.756888481 +0000 UTC m=+0.137826497 container attach 898ed0ba68b3ec2100e1afd9df2abaca623395879461e6d8a28d62c071b6591c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  1 04:52:58 np0005540825 sleepy_elbakyan[104756]: 167 167
Dec  1 04:52:58 np0005540825 podman[104739]: 2025-12-01 09:52:58.761054022 +0000 UTC m=+0.141992008 container died 898ed0ba68b3ec2100e1afd9df2abaca623395879461e6d8a28d62c071b6591c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_elbakyan, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:52:58 np0005540825 systemd[1]: libpod-898ed0ba68b3ec2100e1afd9df2abaca623395879461e6d8a28d62c071b6591c.scope: Deactivated successfully.
Dec  1 04:52:58 np0005540825 python3.9[104723]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:52:58 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d43f823212c3e10585903c7645dae8bc38563a7c05dbf62b4f487e7df430d378-merged.mount: Deactivated successfully.
Dec  1 04:52:58 np0005540825 podman[104739]: 2025-12-01 09:52:58.800084057 +0000 UTC m=+0.181022053 container remove 898ed0ba68b3ec2100e1afd9df2abaca623395879461e6d8a28d62c071b6591c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 04:52:58 np0005540825 systemd[1]: libpod-conmon-898ed0ba68b3ec2100e1afd9df2abaca623395879461e6d8a28d62c071b6591c.scope: Deactivated successfully.
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: Reconfiguring osd.1 (monmap changed)...
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: Reconfiguring daemon osd.1 on compute-0
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:52:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:52:58 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec  1 04:52:58 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec  1 04:52:58 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec  1 04:52:58 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec  1 04:52:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:59 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:52:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:52:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:52:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:52:59.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:52:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v24: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Dec  1 04:52:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec  1 04:52:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  1 04:52:59 np0005540825 systemd[1]: Stopping Ceph node-exporter.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:52:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:52:59 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad200008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:00 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34002eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:00.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec  1 04:53:00 np0005540825 podman[104889]: 2025-12-01 09:53:00.603570864 +0000 UTC m=+0.779733873 container died cd3077bd2d5a007c3a726828ac7eae9ffbb7d553deec632ef7494e1db8acac45 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:00 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1cffbe57ed3d8c2bbb0a46291afd0f52d8bf60d8bc0b99b2b31c1db3ee4744b8-merged.mount: Deactivated successfully.
Dec  1 04:53:00 np0005540825 podman[104889]: 2025-12-01 09:53:00.646339378 +0000 UTC m=+0.822502407 container remove cd3077bd2d5a007c3a726828ac7eae9ffbb7d553deec632ef7494e1db8acac45 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:00 np0005540825 bash[104889]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0
Dec  1 04:53:00 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Dec  1 04:53:00 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:00 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:00 np0005540825 ceph-mon[74416]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec  1 04:53:00 np0005540825 ceph-mon[74416]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec  1 04:53:00 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  1 04:53:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  1 04:53:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec  1 04:53:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec  1 04:53:00 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@node-exporter.compute-0.service: Failed with result 'exit-code'.
Dec  1 04:53:00 np0005540825 systemd[1]: Stopped Ceph node-exporter.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:53:00 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@node-exporter.compute-0.service: Consumed 2.331s CPU time.
Dec  1 04:53:00 np0005540825 systemd[1]: Starting Ceph node-exporter.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:53:01 np0005540825 podman[105003]: 2025-12-01 09:53:01.009041959 +0000 UTC m=+0.038609635 container create 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:01 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afc6a6a4e86337c877a462b565543c76cbc03d6ffa76d4afa92b52517e7412a9/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:01 np0005540825 podman[105003]: 2025-12-01 09:53:01.067304354 +0000 UTC m=+0.096872050 container init 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:01 np0005540825 podman[105003]: 2025-12-01 09:53:01.072989775 +0000 UTC m=+0.102557451 container start 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:01 np0005540825 bash[105003]: 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f
Dec  1 04:53:01 np0005540825 podman[105003]: 2025-12-01 09:53:00.991676228 +0000 UTC m=+0.021243924 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.078Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.078Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.078Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.079Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.079Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.079Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=arp
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=bcache
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=bonding
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=cpu
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=dmi
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=edac
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=entropy
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=filefd
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=hwmon
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=netclass
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=netdev
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=netstat
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=nfs
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=nvme
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=os
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=pressure
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=rapl
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=selinux
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=softnet
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=stat
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=textfile
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=time
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=uname
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=xfs
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.080Z caller=node_exporter.go:117 level=info collector=zfs
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.081Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0[105019]: ts=2025-12-01T09:53:01.081Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec  1 04:53:01 np0005540825 systemd[1]: Started Ceph node-exporter.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:53:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:01.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v26: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 448 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:01 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  1 04:53:01 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  1 04:53:01 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  1 04:53:01 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:01] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Dec  1 04:53:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:01] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Dec  1 04:53:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:01 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:01 np0005540825 podman[105106]: 2025-12-01 09:53:01.705198775 +0000 UTC m=+0.025850727 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec  1 04:53:01 np0005540825 podman[105106]: 2025-12-01 09:53:01.84489051 +0000 UTC m=+0.165542472 volume create 188cbd19abbbd9a034e3d6bf00a60fe627e461abe806aabdd252cea3fc640bdd
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  1 04:53:01 np0005540825 ceph-mon[74416]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  1 04:53:01 np0005540825 podman[105106]: 2025-12-01 09:53:01.897208758 +0000 UTC m=+0.217860730 container create fce9bc3134ca3978698eee75319e5599101ee8ed113eb1f0b8378296df29fd69 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  1 04:53:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec  1 04:53:02 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec  1 04:53:02 np0005540825 systemd[1]: Started libpod-conmon-fce9bc3134ca3978698eee75319e5599101ee8ed113eb1f0b8378296df29fd69.scope.
Dec  1 04:53:02 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:53:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c0efa4c9c91f221b3e658cba7a6d0bfb79021da2471315fe7a6d90c6235c71e/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:02 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:53:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:02.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:53:02 np0005540825 podman[105106]: 2025-12-01 09:53:02.772133956 +0000 UTC m=+1.092785958 container init fce9bc3134ca3978698eee75319e5599101ee8ed113eb1f0b8378296df29fd69 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:02 np0005540825 podman[105106]: 2025-12-01 09:53:02.781914975 +0000 UTC m=+1.102566907 container start fce9bc3134ca3978698eee75319e5599101ee8ed113eb1f0b8378296df29fd69 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:02 np0005540825 systemd[1]: libpod-fce9bc3134ca3978698eee75319e5599101ee8ed113eb1f0b8378296df29fd69.scope: Deactivated successfully.
Dec  1 04:53:02 np0005540825 conmon[105122]: conmon fce9bc3134ca3978698e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fce9bc3134ca3978698eee75319e5599101ee8ed113eb1f0b8378296df29fd69.scope/container/memory.events
Dec  1 04:53:02 np0005540825 nostalgic_shannon[105122]: 65534 65534
Dec  1 04:53:02 np0005540825 podman[105106]: 2025-12-01 09:53:02.795365732 +0000 UTC m=+1.116017694 container attach fce9bc3134ca3978698eee75319e5599101ee8ed113eb1f0b8378296df29fd69 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:02 np0005540825 podman[105106]: 2025-12-01 09:53:02.79604009 +0000 UTC m=+1.116692022 container died fce9bc3134ca3978698eee75319e5599101ee8ed113eb1f0b8378296df29fd69 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:02 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4c0efa4c9c91f221b3e658cba7a6d0bfb79021da2471315fe7a6d90c6235c71e-merged.mount: Deactivated successfully.
Dec  1 04:53:02 np0005540825 podman[105106]: 2025-12-01 09:53:02.869780076 +0000 UTC m=+1.190432008 container remove fce9bc3134ca3978698eee75319e5599101ee8ed113eb1f0b8378296df29fd69 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:02 np0005540825 podman[105106]: 2025-12-01 09:53:02.873908826 +0000 UTC m=+1.194560758 volume remove 188cbd19abbbd9a034e3d6bf00a60fe627e461abe806aabdd252cea3fc640bdd
Dec  1 04:53:02 np0005540825 systemd[1]: libpod-conmon-fce9bc3134ca3978698eee75319e5599101ee8ed113eb1f0b8378296df29fd69.scope: Deactivated successfully.
Dec  1 04:53:02 np0005540825 podman[105146]: 2025-12-01 09:53:02.988595988 +0000 UTC m=+0.092792413 volume create 403c997f4c3cb4f3a6ecde5dab2a67f49cbfc7e3b009e32b8ae1df4eda36b59d
Dec  1 04:53:02 np0005540825 podman[105146]: 2025-12-01 09:53:02.999843856 +0000 UTC m=+0.104040261 container create 5e283cfb7cdc6753b8e741ef34d4cd77e5074839103599414a1181d52b47d2a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_kalam, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:03 np0005540825 podman[105146]: 2025-12-01 09:53:02.921728654 +0000 UTC m=+0.025925069 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  1 04:53:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:03 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34002eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:03 np0005540825 systemd[1]: Started libpod-conmon-5e283cfb7cdc6753b8e741ef34d4cd77e5074839103599414a1181d52b47d2a1.scope.
Dec  1 04:53:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec  1 04:53:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec  1 04:53:03 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec  1 04:53:03 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:53:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c829188d38dc89e7352670ad72ebb5c3b2426f461a93e7d53454db5864f4a96/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:03 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  1 04:53:03 np0005540825 podman[105146]: 2025-12-01 09:53:03.071185778 +0000 UTC m=+0.175382203 container init 5e283cfb7cdc6753b8e741ef34d4cd77e5074839103599414a1181d52b47d2a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_kalam, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:03 np0005540825 podman[105146]: 2025-12-01 09:53:03.078530473 +0000 UTC m=+0.182726878 container start 5e283cfb7cdc6753b8e741ef34d4cd77e5074839103599414a1181d52b47d2a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_kalam, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:03 np0005540825 festive_kalam[105164]: 65534 65534
Dec  1 04:53:03 np0005540825 systemd[1]: libpod-5e283cfb7cdc6753b8e741ef34d4cd77e5074839103599414a1181d52b47d2a1.scope: Deactivated successfully.
Dec  1 04:53:03 np0005540825 podman[105146]: 2025-12-01 09:53:03.082722554 +0000 UTC m=+0.186919079 container attach 5e283cfb7cdc6753b8e741ef34d4cd77e5074839103599414a1181d52b47d2a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_kalam, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:03 np0005540825 podman[105146]: 2025-12-01 09:53:03.083338571 +0000 UTC m=+0.187534976 container died 5e283cfb7cdc6753b8e741ef34d4cd77e5074839103599414a1181d52b47d2a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_kalam, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:03.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v29: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 191 B/s rd, 0 op/s
Dec  1 04:53:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec  1 04:53:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  1 04:53:03 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1c829188d38dc89e7352670ad72ebb5c3b2426f461a93e7d53454db5864f4a96-merged.mount: Deactivated successfully.
Dec  1 04:53:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:03 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:53:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec  1 04:53:03 np0005540825 podman[105146]: 2025-12-01 09:53:03.882226121 +0000 UTC m=+0.986422526 container remove 5e283cfb7cdc6753b8e741ef34d4cd77e5074839103599414a1181d52b47d2a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=festive_kalam, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:03 np0005540825 podman[105146]: 2025-12-01 09:53:03.892429482 +0000 UTC m=+0.996625897 volume remove 403c997f4c3cb4f3a6ecde5dab2a67f49cbfc7e3b009e32b8ae1df4eda36b59d
Dec  1 04:53:03 np0005540825 systemd[1]: Stopping Ceph alertmanager.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:53:03 np0005540825 systemd[1]: libpod-conmon-5e283cfb7cdc6753b8e741ef34d4cd77e5074839103599414a1181d52b47d2a1.scope: Deactivated successfully.
Dec  1 04:53:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  1 04:53:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec  1 04:53:04 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec  1 04:53:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[97641]: ts=2025-12-01T09:53:04.174Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Dec  1 04:53:04 np0005540825 podman[105223]: 2025-12-01 09:53:04.186470261 +0000 UTC m=+0.066384512 container died 0511cb329529d79a0314faf710797871465300fa18afe5331763ee944339d662 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:04 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2d23945a90f26ec1cb71a36d1aaf85f1b4860a553ba9333bae61fe9e515864e6-merged.mount: Deactivated successfully.
Dec  1 04:53:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:04 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004270 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:04 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  1 04:53:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:04.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:04 np0005540825 podman[105223]: 2025-12-01 09:53:04.34289034 +0000 UTC m=+0.222804581 container remove 0511cb329529d79a0314faf710797871465300fa18afe5331763ee944339d662 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:04 np0005540825 podman[105223]: 2025-12-01 09:53:04.381972997 +0000 UTC m=+0.261887258 volume remove 7abf4f7c201a9a09023b9e12e8b047ddf4a7274c86ac3597c2d5b0d07c7b6c6d
Dec  1 04:53:04 np0005540825 bash[105223]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0
Dec  1 04:53:04 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@alertmanager.compute-0.service: Deactivated successfully.
Dec  1 04:53:04 np0005540825 systemd[1]: Stopped Ceph alertmanager.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:53:04 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@alertmanager.compute-0.service: Consumed 1.171s CPU time.
Dec  1 04:53:04 np0005540825 systemd[1]: Starting Ceph alertmanager.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:53:04 np0005540825 podman[105335]: 2025-12-01 09:53:04.791494869 +0000 UTC m=+0.026496933 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  1 04:53:04 np0005540825 podman[105335]: 2025-12-01 09:53:04.964633612 +0000 UTC m=+0.199635676 volume create 36881958c29c4109d589d0640f2b298ba4709feb0f143e1a4dd683e3aba4ec4b
Dec  1 04:53:04 np0005540825 podman[105335]: 2025-12-01 09:53:04.987801976 +0000 UTC m=+0.222804040 container create fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:05 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ba215d1395449d9a173a47e7ff33ee17b5477aae2f71bf894ea844db8bddc3/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ba215d1395449d9a173a47e7ff33ee17b5477aae2f71bf894ea844db8bddc3/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:05 np0005540825 podman[105335]: 2025-12-01 09:53:05.099454248 +0000 UTC m=+0.334456302 container init fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:05 np0005540825 podman[105335]: 2025-12-01 09:53:05.106352071 +0000 UTC m=+0.341354105 container start fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:05.139Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec  1 04:53:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:05.139Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec  1 04:53:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:05.149Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec  1 04:53:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:05.151Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec  1 04:53:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec  1 04:53:05 np0005540825 bash[105335]: fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c
Dec  1 04:53:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:05.200Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  1 04:53:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:05.201Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  1 04:53:05 np0005540825 systemd[1]: Started Ceph alertmanager.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:53:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:05.207Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec  1 04:53:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:05.207Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec  1 04:53:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:53:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:05.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:53:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v31: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:53:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec  1 04:53:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  1 04:53:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:53:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec  1 04:53:05 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec  1 04:53:05 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  1 04:53:05 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  1 04:53:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:53:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:05 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:05 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  1 04:53:05 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  1 04:53:06 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Dec  1 04:53:06 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Dec  1 04:53:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:06 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:53:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:06.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:53:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec  1 04:53:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  1 04:53:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec  1 04:53:06 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec  1 04:53:06 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:06 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:06 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  1 04:53:06 np0005540825 podman[105452]: 2025-12-01 09:53:06.637467635 +0000 UTC m=+0.028717653 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  1 04:53:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:07 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:07.152Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000494824s
Dec  1 04:53:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:07.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v34: 353 pgs: 1 remapped+peering, 1 activating, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 487 B/s rd, 0 op/s; 26 B/s, 0 objects/s recovering
Dec  1 04:53:07 np0005540825 podman[105452]: 2025-12-01 09:53:07.262698169 +0000 UTC m=+0.653948157 container create 94ec8a283be6d1b09992f1fd65b280de8a55b147a790c725a678b688c72e6843 (image=quay.io/ceph/grafana:10.4.0, name=elastic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 systemd[1]: Started libpod-conmon-94ec8a283be6d1b09992f1fd65b280de8a55b147a790c725a678b688c72e6843.scope.
Dec  1 04:53:07 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:53:07 np0005540825 podman[105452]: 2025-12-01 09:53:07.408278251 +0000 UTC m=+0.799528329 container init 94ec8a283be6d1b09992f1fd65b280de8a55b147a790c725a678b688c72e6843 (image=quay.io/ceph/grafana:10.4.0, name=elastic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 podman[105452]: 2025-12-01 09:53:07.419752555 +0000 UTC m=+0.811002573 container start 94ec8a283be6d1b09992f1fd65b280de8a55b147a790c725a678b688c72e6843 (image=quay.io/ceph/grafana:10.4.0, name=elastic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 podman[105452]: 2025-12-01 09:53:07.424560323 +0000 UTC m=+0.815810341 container attach 94ec8a283be6d1b09992f1fd65b280de8a55b147a790c725a678b688c72e6843 (image=quay.io/ceph/grafana:10.4.0, name=elastic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 elastic_mirzakhani[105470]: 472 0
Dec  1 04:53:07 np0005540825 systemd[1]: libpod-94ec8a283be6d1b09992f1fd65b280de8a55b147a790c725a678b688c72e6843.scope: Deactivated successfully.
Dec  1 04:53:07 np0005540825 podman[105452]: 2025-12-01 09:53:07.427618044 +0000 UTC m=+0.818868052 container died 94ec8a283be6d1b09992f1fd65b280de8a55b147a790c725a678b688c72e6843 (image=quay.io/ceph/grafana:10.4.0, name=elastic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 systemd[1]: var-lib-containers-storage-overlay-cd8e2e2d264c609912968a8f66bd0ad6ab999bb0e32129d5aeb5836e818b1282-merged.mount: Deactivated successfully.
Dec  1 04:53:07 np0005540825 podman[105452]: 2025-12-01 09:53:07.470635965 +0000 UTC m=+0.861885953 container remove 94ec8a283be6d1b09992f1fd65b280de8a55b147a790c725a678b688c72e6843 (image=quay.io/ceph/grafana:10.4.0, name=elastic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 systemd[1]: libpod-conmon-94ec8a283be6d1b09992f1fd65b280de8a55b147a790c725a678b688c72e6843.scope: Deactivated successfully.
Dec  1 04:53:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec  1 04:53:07 np0005540825 podman[105487]: 2025-12-01 09:53:07.54511232 +0000 UTC m=+0.051808935 container create 3163e3dd6e0ce81855726f3739e5c4764756ade3ad73ab7d2dc736cfb4347885 (image=quay.io/ceph/grafana:10.4.0, name=kind_bouman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 podman[105487]: 2025-12-01 09:53:07.516784149 +0000 UTC m=+0.023480784 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  1 04:53:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:07 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:07 np0005540825 ceph-mon[74416]: Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  1 04:53:07 np0005540825 ceph-mon[74416]: Reconfiguring daemon grafana.compute-0 on compute-0
Dec  1 04:53:07 np0005540825 systemd[1]: Started libpod-conmon-3163e3dd6e0ce81855726f3739e5c4764756ade3ad73ab7d2dc736cfb4347885.scope.
Dec  1 04:53:07 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:53:07 np0005540825 podman[105487]: 2025-12-01 09:53:07.806872373 +0000 UTC m=+0.313569008 container init 3163e3dd6e0ce81855726f3739e5c4764756ade3ad73ab7d2dc736cfb4347885 (image=quay.io/ceph/grafana:10.4.0, name=kind_bouman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 podman[105487]: 2025-12-01 09:53:07.812357158 +0000 UTC m=+0.319053783 container start 3163e3dd6e0ce81855726f3739e5c4764756ade3ad73ab7d2dc736cfb4347885 (image=quay.io/ceph/grafana:10.4.0, name=kind_bouman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 kind_bouman[105506]: 472 0
Dec  1 04:53:07 np0005540825 podman[105487]: 2025-12-01 09:53:07.815627965 +0000 UTC m=+0.322324570 container attach 3163e3dd6e0ce81855726f3739e5c4764756ade3ad73ab7d2dc736cfb4347885 (image=quay.io/ceph/grafana:10.4.0, name=kind_bouman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 systemd[1]: libpod-3163e3dd6e0ce81855726f3739e5c4764756ade3ad73ab7d2dc736cfb4347885.scope: Deactivated successfully.
Dec  1 04:53:07 np0005540825 podman[105487]: 2025-12-01 09:53:07.816125698 +0000 UTC m=+0.322822303 container died 3163e3dd6e0ce81855726f3739e5c4764756ade3ad73ab7d2dc736cfb4347885 (image=quay.io/ceph/grafana:10.4.0, name=kind_bouman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 systemd[1]: var-lib-containers-storage-overlay-8eacf8c92eec88367160f69aba57e9b8359fe83a8137f75c970a6988f4f39cee-merged.mount: Deactivated successfully.
Dec  1 04:53:07 np0005540825 podman[105487]: 2025-12-01 09:53:07.855584825 +0000 UTC m=+0.362281440 container remove 3163e3dd6e0ce81855726f3739e5c4764756ade3ad73ab7d2dc736cfb4347885 (image=quay.io/ceph/grafana:10.4.0, name=kind_bouman, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec  1 04:53:07 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec  1 04:53:07 np0005540825 systemd[1]: libpod-conmon-3163e3dd6e0ce81855726f3739e5c4764756ade3ad73ab7d2dc736cfb4347885.scope: Deactivated successfully.
Dec  1 04:53:07 np0005540825 systemd[1]: Stopping Ceph grafana.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=server t=2025-12-01T09:53:08.196733274Z level=info msg="Shutdown started" reason="System signal: terminated"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=tracing t=2025-12-01T09:53:08.19696187Z level=info msg="Closing tracing"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=grafana-apiserver t=2025-12-01T09:53:08.197645048Z level=info msg="StorageObjectCountTracker pruner is exiting"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=ticker t=2025-12-01T09:53:08.197657668Z level=info msg=stopped last_tick=2025-12-01T09:53:00Z
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[98188]: logger=sqlstore.transactions t=2025-12-01T09:53:08.209679747Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  1 04:53:08 np0005540825 podman[105557]: 2025-12-01 09:53:08.237466834 +0000 UTC m=+0.131832998 container died 6eb1185f94a74a666c6b5c09efc32bc1424dea31547c65157a432674ce35a678 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:08 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9d31f7d48e723406ab9ed22fb0dfbe5a1b660448d71807f5af04abe282adb7e2-merged.mount: Deactivated successfully.
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:08 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:53:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:08.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:53:08 np0005540825 podman[105557]: 2025-12-01 09:53:08.332119465 +0000 UTC m=+0.226485659 container remove 6eb1185f94a74a666c6b5c09efc32bc1424dea31547c65157a432674ce35a678 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:08 np0005540825 bash[105557]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0
Dec  1 04:53:08 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@grafana.compute-0.service: Deactivated successfully.
Dec  1 04:53:08 np0005540825 systemd[1]: Stopped Ceph grafana.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:53:08 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@grafana.compute-0.service: Consumed 4.435s CPU time.
Dec  1 04:53:08 np0005540825 systemd[1]: Starting Ceph grafana.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:53:08 np0005540825 podman[105662]: 2025-12-01 09:53:08.665125298 +0000 UTC m=+0.049045562 container create 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:08 np0005540825 podman[105662]: 2025-12-01 09:53:08.639134239 +0000 UTC m=+0.023054553 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  1 04:53:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a9540f9e136eb9beec5c357457b4f9a6ffe33c38053135936baf5227f74d8b/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a9540f9e136eb9beec5c357457b4f9a6ffe33c38053135936baf5227f74d8b/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a9540f9e136eb9beec5c357457b4f9a6ffe33c38053135936baf5227f74d8b/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a9540f9e136eb9beec5c357457b4f9a6ffe33c38053135936baf5227f74d8b/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a9540f9e136eb9beec5c357457b4f9a6ffe33c38053135936baf5227f74d8b/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:08 np0005540825 podman[105662]: 2025-12-01 09:53:08.786800106 +0000 UTC m=+0.170720390 container init 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:08 np0005540825 podman[105662]: 2025-12-01 09:53:08.792814115 +0000 UTC m=+0.176734379 container start 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:08 np0005540825 bash[105662]: 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917
Dec  1 04:53:08 np0005540825 systemd[1]: Started Ceph grafana.compute-0 for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:53:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec  1 04:53:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:53:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec  1 04:53:08 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997493274Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-01T09:53:08Z
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997769562Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997783692Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997788242Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997792362Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997796032Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997800362Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997804572Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997808903Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997812583Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997817183Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997820673Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997824923Z level=info msg=Target target=[all]
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997838623Z level=info msg="Path Home" path=/usr/share/grafana
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997842333Z level=info msg="Path Data" path=/var/lib/grafana
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997846654Z level=info msg="Path Logs" path=/var/log/grafana
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997851294Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997855714Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=settings t=2025-12-01T09:53:08.997859454Z level=info msg="App mode production"
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=sqlstore t=2025-12-01T09:53:08.998161162Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=sqlstore t=2025-12-01T09:53:08.998176452Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec  1 04:53:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=migrator t=2025-12-01T09:53:08.998860871Z level=info msg="Starting DB migrations"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=migrator t=2025-12-01T09:53:09.015519672Z level=info msg="migrations completed" performed=0 skipped=547 duration=553.294µs
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=sqlstore t=2025-12-01T09:53:09.016448857Z level=info msg="Created default organization"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=secrets t=2025-12-01T09:53:09.016862748Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:09 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=plugin.store t=2025-12-01T09:53:09.040184367Z level=info msg="Loading plugins..."
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=local.finder t=2025-12-01T09:53:09.11798666Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=plugin.store t=2025-12-01T09:53:09.118017571Z level=info msg="Plugins loaded" count=55 duration=77.835224ms
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=query_data t=2025-12-01T09:53:09.120693302Z level=info msg="Query Service initialization"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=live.push_http t=2025-12-01T09:53:09.123572839Z level=info msg="Live Push Gateway initialization"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=ngalert.migration t=2025-12-01T09:53:09.161279909Z level=info msg=Starting
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=ngalert.state.manager t=2025-12-01T09:53:09.176144453Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=infra.usagestats.collector t=2025-12-01T09:53:09.17790793Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=provisioning.datasources t=2025-12-01T09:53:09.180462388Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=provisioning.alerting t=2025-12-01T09:53:09.202950954Z level=info msg="starting to provision alerting"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=provisioning.alerting t=2025-12-01T09:53:09.202978085Z level=info msg="finished to provision alerting"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=ngalert.state.manager t=2025-12-01T09:53:09.203828197Z level=info msg="Warming state cache for startup"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=ngalert.multiorg.alertmanager t=2025-12-01T09:53:09.206054096Z level=info msg="Starting MultiOrg Alertmanager"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=http.server t=2025-12-01T09:53:09.210756381Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=http.server t=2025-12-01T09:53:09.21110095Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=grafanaStorageLogger t=2025-12-01T09:53:09.216812272Z level=info msg="Storage starting"
Dec  1 04:53:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:09.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v37: 353 pgs: 1 remapped+peering, 1 activating, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=plugins.update.checker t=2025-12-01T09:53:09.271163483Z level=info msg="Update check succeeded" duration=67.732676ms
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=grafana.update.checker t=2025-12-01T09:53:09.278450067Z level=info msg="Update check succeeded" duration=72.540205ms
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=provisioning.dashboard t=2025-12-01T09:53:09.355539372Z level=info msg="starting to provision dashboards"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=ngalert.state.manager t=2025-12-01T09:53:09.356836776Z level=info msg="State cache has been initialized" states=0 duration=153.001639ms
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=ngalert.scheduler t=2025-12-01T09:53:09.356893487Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=ticker t=2025-12-01T09:53:09.35699688Z level=info msg=starting first_tick=2025-12-01T09:53:10Z
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=provisioning.dashboard t=2025-12-01T09:53:09.374143225Z level=info msg="finished to provision dashboards"
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=grafana-apiserver t=2025-12-01T09:53:09.614618344Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=grafana-apiserver t=2025-12-01T09:53:09.615094686Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec  1 04:53:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:09 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:53:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Dec  1 04:53:09 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Dec  1 04:53:10 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:10 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:10 np0005540825 ceph-mon[74416]: Reconfiguring crash.compute-1 (monmap changed)...
Dec  1 04:53:10 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  1 04:53:10 np0005540825 ceph-mon[74416]: Reconfiguring daemon crash.compute-1 on compute-1
Dec  1 04:53:10 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:10 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:10 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  1 04:53:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:10 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:10.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:11.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v38: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 523 B/s rd, 0 op/s; 37 B/s, 2 objects/s recovering
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:53:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:11] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Dec  1 04:53:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:11] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec  1 04:53:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: Reconfiguring osd.0 (monmap changed)...
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: Reconfiguring daemon osd.0 on compute-1
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:11 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Dec  1 04:53:11 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:53:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:53:11 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Dec  1 04:53:11 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Dec  1 04:53:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:12.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:12 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Dec  1 04:53:12 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:53:12 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Dec  1 04:53:12 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: Reconfiguring mon.compute-1 (monmap changed)...
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: Reconfiguring daemon mon.compute-1 on compute-1
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:12 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  1 04:53:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:13 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:53:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:13.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:53:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v40: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:13 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.kdtkls (monmap changed)...
Dec  1 04:53:13 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.kdtkls (monmap changed)...
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdtkls", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdtkls", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:53:13 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.kdtkls on compute-2
Dec  1 04:53:13 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.kdtkls on compute-2
Dec  1 04:53:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:13 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: Reconfiguring mon.compute-2 (monmap changed)...
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: Reconfiguring daemon mon.compute-2 on compute-2
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: Reconfiguring mgr.compute-2.kdtkls (monmap changed)...
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kdtkls", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: Reconfiguring daemon mgr.compute-2.kdtkls on compute-2
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec  1 04:53:13 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:14 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (unknown last config time)...
Dec  1 04:53:14 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (unknown last config time)...
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:53:14 np0005540825 ceph-mgr[74709]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on compute-2
Dec  1 04:53:14 np0005540825 ceph-mgr[74709]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on compute-2
Dec  1 04:53:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:14 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad040016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:14.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Dec  1 04:53:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec  1 04:53:14 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec  1 04:53:15 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec  1 04:53:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:15 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  1 04:53:15 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec  1 04:53:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:15.156Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.004266965s
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: Reconfiguring osd.2 (unknown last config time)...
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: Reconfiguring daemon osd.2 on compute-2
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:15 np0005540825 ceph-mgr[74709]: [prometheus INFO root] Restarting engine...
Dec  1 04:53:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: [01/Dec/2025:09:53:15] ENGINE Bus STOPPING
Dec  1 04:53:15 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.error] [01/Dec/2025:09:53:15] ENGINE Bus STOPPING
Dec  1 04:53:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v42: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 162 B/s rd, 0 op/s; 17 B/s, 1 objects/s recovering
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec  1 04:53:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  1 04:53:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:15.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: [01/Dec/2025:09:53:15] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec  1 04:53:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: [01/Dec/2025:09:53:15] ENGINE Bus STOPPED
Dec  1 04:53:15 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.error] [01/Dec/2025:09:53:15] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec  1 04:53:15 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.error] [01/Dec/2025:09:53:15] ENGINE Bus STOPPED
Dec  1 04:53:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: [01/Dec/2025:09:53:15] ENGINE Bus STARTING
Dec  1 04:53:15 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.error] [01/Dec/2025:09:53:15] ENGINE Bus STARTING
Dec  1 04:53:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: [01/Dec/2025:09:53:15] ENGINE Serving on http://:::9283
Dec  1 04:53:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: [01/Dec/2025:09:53:15] ENGINE Bus STARTED
Dec  1 04:53:15 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.error] [01/Dec/2025:09:53:15] ENGINE Serving on http://:::9283
Dec  1 04:53:15 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.error] [01/Dec/2025:09:53:15] ENGINE Bus STARTED
Dec  1 04:53:15 np0005540825 ceph-mgr[74709]: [prometheus INFO root] Engine started.
Dec  1 04:53:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:15 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:16 np0005540825 podman[105850]: 2025-12-01 09:53:16.013179636 +0000 UTC m=+0.074335082 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  1 04:53:16 np0005540825 podman[105850]: 2025-12-01 09:53:16.130733525 +0000 UTC m=+0.191888951 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:53:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec  1 04:53:16 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  1 04:53:16 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:16 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  1 04:53:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  1 04:53:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec  1 04:53:16 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec  1 04:53:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:16 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:16.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:16 np0005540825 podman[105996]: 2025-12-01 09:53:16.793095004 +0000 UTC m=+0.234722777 container exec 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:16 np0005540825 podman[105996]: 2025-12-01 09:53:16.803709476 +0000 UTC m=+0.245337249 container exec_died 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:17 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad040016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:17 np0005540825 ceph-mgr[74709]: [dashboard INFO request] [192.168.122.100:37090] [POST] [200] [0.133s] [4.0B] [c2c4b6fb-129d-4641-8223-c7c79d79059f] /api/prometheus_receiver
Dec  1 04:53:17 np0005540825 podman[106092]: 2025-12-01 09:53:17.247964229 +0000 UTC m=+0.089078033 container exec 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 04:53:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:53:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec  1 04:53:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  1 04:53:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:53:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:17.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:53:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec  1 04:53:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  1 04:53:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec  1 04:53:17 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec  1 04:53:17 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 121 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=89/90 n=7 ec=61/50 lis/c=89/89 les/c/f=90/90/0 sis=121 pruub=15.253195763s) [0] r=-1 lpr=121 pi=[89,121)/1 crt=56'1015 mlcod 0'0 active pruub 304.979370117s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:53:17 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 121 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=89/90 n=7 ec=61/50 lis/c=89/89 les/c/f=90/90/0 sis=121 pruub=15.253152847s) [0] r=-1 lpr=121 pi=[89,121)/1 crt=56'1015 mlcod 0'0 unknown NOTIFY pruub 304.979370117s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:53:17 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  1 04:53:17 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  1 04:53:17 np0005540825 podman[106092]: 2025-12-01 09:53:17.292380688 +0000 UTC m=+0.133494432 container exec_died 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 04:53:17 np0005540825 podman[106159]: 2025-12-01 09:53:17.537230282 +0000 UTC m=+0.063135015 container exec 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 04:53:17 np0005540825 podman[106159]: 2025-12-01 09:53:17.574132691 +0000 UTC m=+0.100037424 container exec_died 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 04:53:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:17 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:17 np0005540825 podman[106226]: 2025-12-01 09:53:17.836576373 +0000 UTC m=+0.072804522 container exec a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, release=1793, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc.)
Dec  1 04:53:17 np0005540825 podman[106226]: 2025-12-01 09:53:17.850636646 +0000 UTC m=+0.086864775 container exec_died a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, description=keepalived for Ceph, version=2.2.4, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, name=keepalived)
Dec  1 04:53:18 np0005540825 podman[106291]: 2025-12-01 09:53:18.097523154 +0000 UTC m=+0.070231964 container exec fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:18 np0005540825 podman[106291]: 2025-12-01 09:53:18.124630953 +0000 UTC m=+0.097339733 container exec_died fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec  1 04:53:18 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  1 04:53:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:18 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec  1 04:53:18 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec  1 04:53:18 np0005540825 podman[106366]: 2025-12-01 09:53:18.319423209 +0000 UTC m=+0.048413364 container exec 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 122 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=89/90 n=7 ec=61/50 lis/c=89/89 les/c/f=90/90/0 sis=122) [0]/[1] r=0 lpr=122 pi=[89,122)/1 crt=56'1015 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:53:18 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 122 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=89/90 n=7 ec=61/50 lis/c=89/89 les/c/f=90/90/0 sis=122) [0]/[1] r=0 lpr=122 pi=[89,122)/1 crt=56'1015 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:53:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:53:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:18.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:53:18 np0005540825 podman[106366]: 2025-12-01 09:53:18.472548601 +0000 UTC m=+0.201538776 container exec_died 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:53:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  1 04:53:18 np0005540825 podman[106487]: 2025-12-01 09:53:18.970637863 +0000 UTC m=+0.083056244 container exec f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:19 np0005540825 podman[106487]: 2025-12-01 09:53:19.028777415 +0000 UTC m=+0.141195766 container exec_died f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:53:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:19 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad040016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:53:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 385 B/s rd, 0 op/s
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  1 04:53:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:53:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:19.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:53:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:53:19 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 123 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=122/123 n=7 ec=61/50 lis/c=89/89 les/c/f=90/90/0 sis=122) [0]/[1] async=[0] r=0 lpr=122 pi=[89,122)/1 crt=56'1015 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:53:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:19 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:19 np0005540825 podman[106641]: 2025-12-01 09:53:19.935960558 +0000 UTC m=+0.033556041 container create 0835e303921b586ff0f69abddddded7ce6395776c21924c113fc59b6385ea5ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_kalam, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  1 04:53:19 np0005540825 systemd[1]: Started libpod-conmon-0835e303921b586ff0f69abddddded7ce6395776c21924c113fc59b6385ea5ba.scope.
Dec  1 04:53:20 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:53:20 np0005540825 podman[106641]: 2025-12-01 09:53:19.920358615 +0000 UTC m=+0.017954108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:53:20 np0005540825 podman[106641]: 2025-12-01 09:53:20.034261386 +0000 UTC m=+0.131856889 container init 0835e303921b586ff0f69abddddded7ce6395776c21924c113fc59b6385ea5ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_kalam, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:53:20 np0005540825 podman[106641]: 2025-12-01 09:53:20.042265998 +0000 UTC m=+0.139861471 container start 0835e303921b586ff0f69abddddded7ce6395776c21924c113fc59b6385ea5ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_kalam, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:53:20 np0005540825 xenodochial_kalam[106657]: 167 167
Dec  1 04:53:20 np0005540825 systemd[1]: libpod-0835e303921b586ff0f69abddddded7ce6395776c21924c113fc59b6385ea5ba.scope: Deactivated successfully.
Dec  1 04:53:20 np0005540825 podman[106641]: 2025-12-01 09:53:20.048036851 +0000 UTC m=+0.145632374 container attach 0835e303921b586ff0f69abddddded7ce6395776c21924c113fc59b6385ea5ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  1 04:53:20 np0005540825 podman[106641]: 2025-12-01 09:53:20.048431152 +0000 UTC m=+0.146026635 container died 0835e303921b586ff0f69abddddded7ce6395776c21924c113fc59b6385ea5ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 04:53:20 np0005540825 systemd[1]: var-lib-containers-storage-overlay-3c6688f955d0b8293aeae4e4aea168175d5f150e73047325343bac5e372ec8f0-merged.mount: Deactivated successfully.
Dec  1 04:53:20 np0005540825 podman[106641]: 2025-12-01 09:53:20.096513617 +0000 UTC m=+0.194109090 container remove 0835e303921b586ff0f69abddddded7ce6395776c21924c113fc59b6385ea5ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_kalam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 04:53:20 np0005540825 systemd[1]: libpod-conmon-0835e303921b586ff0f69abddddded7ce6395776c21924c113fc59b6385ea5ba.scope: Deactivated successfully.
Dec  1 04:53:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:20 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:20.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:20 np0005540825 podman[106689]: 2025-12-01 09:53:20.285752627 +0000 UTC m=+0.022896078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:53:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec  1 04:53:20 np0005540825 podman[106689]: 2025-12-01 09:53:20.852813799 +0000 UTC m=+0.589957270 container create 42cc258d5f4631beaa20fc63576d4e2488628af077639144ad0175908d57c0bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  1 04:53:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec  1 04:53:20 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  1 04:53:20 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:20 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:53:20 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 124 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=122/123 n=7 ec=61/50 lis/c=122/89 les/c/f=123/90/0 sis=124 pruub=14.455493927s) [0] async=[0] r=-1 lpr=124 pi=[89,124)/1 crt=56'1015 mlcod 56'1015 active pruub 307.791229248s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:53:20 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 124 pg[10.19( v 56'1015 (0'0,56'1015] local-lis/les=122/123 n=7 ec=61/50 lis/c=122/89 les/c/f=123/90/0 sis=124 pruub=14.454746246s) [0] r=-1 lpr=124 pi=[89,124)/1 crt=56'1015 mlcod 0'0 unknown NOTIFY pruub 307.791229248s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:53:20 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec  1 04:53:20 np0005540825 systemd[1]: Started libpod-conmon-42cc258d5f4631beaa20fc63576d4e2488628af077639144ad0175908d57c0bc.scope.
Dec  1 04:53:20 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:53:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492090ff7399f7b2779a21962dccd9f24f25f207b71126f9cc58d9523c78fdee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492090ff7399f7b2779a21962dccd9f24f25f207b71126f9cc58d9523c78fdee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492090ff7399f7b2779a21962dccd9f24f25f207b71126f9cc58d9523c78fdee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492090ff7399f7b2779a21962dccd9f24f25f207b71126f9cc58d9523c78fdee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492090ff7399f7b2779a21962dccd9f24f25f207b71126f9cc58d9523c78fdee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:21 np0005540825 podman[106689]: 2025-12-01 09:53:21.05002944 +0000 UTC m=+0.787172911 container init 42cc258d5f4631beaa20fc63576d4e2488628af077639144ad0175908d57c0bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:53:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:21 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:21 np0005540825 podman[106689]: 2025-12-01 09:53:21.060230891 +0000 UTC m=+0.797374322 container start 42cc258d5f4631beaa20fc63576d4e2488628af077639144ad0175908d57c0bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:53:21 np0005540825 podman[106689]: 2025-12-01 09:53:21.064044622 +0000 UTC m=+0.801188073 container attach 42cc258d5f4631beaa20fc63576d4e2488628af077639144ad0175908d57c0bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:53:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec  1 04:53:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec  1 04:53:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  1 04:53:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:21.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:21] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Dec  1 04:53:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:21] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Dec  1 04:53:21 np0005540825 crazy_blackwell[106705]: --> passed data devices: 0 physical, 1 LVM
Dec  1 04:53:21 np0005540825 crazy_blackwell[106705]: --> All data devices are unavailable
Dec  1 04:53:21 np0005540825 systemd[1]: libpod-42cc258d5f4631beaa20fc63576d4e2488628af077639144ad0175908d57c0bc.scope: Deactivated successfully.
Dec  1 04:53:21 np0005540825 podman[106689]: 2025-12-01 09:53:21.433777879 +0000 UTC m=+1.170921310 container died 42cc258d5f4631beaa20fc63576d4e2488628af077639144ad0175908d57c0bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 04:53:21 np0005540825 systemd[1]: var-lib-containers-storage-overlay-492090ff7399f7b2779a21962dccd9f24f25f207b71126f9cc58d9523c78fdee-merged.mount: Deactivated successfully.
Dec  1 04:53:21 np0005540825 podman[106689]: 2025-12-01 09:53:21.533117654 +0000 UTC m=+1.270261085 container remove 42cc258d5f4631beaa20fc63576d4e2488628af077639144ad0175908d57c0bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 04:53:21 np0005540825 systemd[1]: libpod-conmon-42cc258d5f4631beaa20fc63576d4e2488628af077639144ad0175908d57c0bc.scope: Deactivated successfully.
Dec  1 04:53:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:21 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec  1 04:53:21 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  1 04:53:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  1 04:53:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec  1 04:53:21 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 125 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=95/96 n=2 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=125 pruub=15.202398300s) [0] r=-1 lpr=125 pi=[95,125)/1 crt=56'1015 mlcod 0'0 active pruub 309.555633545s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:53:21 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 125 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=95/96 n=2 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=125 pruub=15.202365875s) [0] r=-1 lpr=125 pi=[95,125)/1 crt=56'1015 mlcod 0'0 unknown NOTIFY pruub 309.555633545s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:53:21 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec  1 04:53:22 np0005540825 podman[106827]: 2025-12-01 09:53:22.157795383 +0000 UTC m=+0.043815393 container create 5e640d05a46ec8bfcbdeb035a609b8a76d56ccc90ad1c7adb9440021c1faf8c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 04:53:22 np0005540825 systemd[1]: Started libpod-conmon-5e640d05a46ec8bfcbdeb035a609b8a76d56ccc90ad1c7adb9440021c1faf8c8.scope.
Dec  1 04:53:22 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:53:22 np0005540825 podman[106827]: 2025-12-01 09:53:22.137037322 +0000 UTC m=+0.023057362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:53:22 np0005540825 podman[106827]: 2025-12-01 09:53:22.247176044 +0000 UTC m=+0.133196034 container init 5e640d05a46ec8bfcbdeb035a609b8a76d56ccc90ad1c7adb9440021c1faf8c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bardeen, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:53:22 np0005540825 podman[106827]: 2025-12-01 09:53:22.255433883 +0000 UTC m=+0.141453863 container start 5e640d05a46ec8bfcbdeb035a609b8a76d56ccc90ad1c7adb9440021c1faf8c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:53:22 np0005540825 podman[106827]: 2025-12-01 09:53:22.2591166 +0000 UTC m=+0.145136780 container attach 5e640d05a46ec8bfcbdeb035a609b8a76d56ccc90ad1c7adb9440021c1faf8c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bardeen, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:53:22 np0005540825 naughty_bardeen[106843]: 167 167
Dec  1 04:53:22 np0005540825 systemd[1]: libpod-5e640d05a46ec8bfcbdeb035a609b8a76d56ccc90ad1c7adb9440021c1faf8c8.scope: Deactivated successfully.
Dec  1 04:53:22 np0005540825 podman[106827]: 2025-12-01 09:53:22.263008724 +0000 UTC m=+0.149028714 container died 5e640d05a46ec8bfcbdeb035a609b8a76d56ccc90ad1c7adb9440021c1faf8c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bardeen, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:53:22 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5fd94f48c7c6fc7f88ff90a8229015aa2f9b878a251b0f72b7934465cf6e3a23-merged.mount: Deactivated successfully.
Dec  1 04:53:22 np0005540825 podman[106827]: 2025-12-01 09:53:22.297286323 +0000 UTC m=+0.183306313 container remove 5e640d05a46ec8bfcbdeb035a609b8a76d56ccc90ad1c7adb9440021c1faf8c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_bardeen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:53:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:22 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:22 np0005540825 systemd[1]: libpod-conmon-5e640d05a46ec8bfcbdeb035a609b8a76d56ccc90ad1c7adb9440021c1faf8c8.scope: Deactivated successfully.
Dec  1 04:53:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:53:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:22.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:53:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095322 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 04:53:22 np0005540825 podman[106868]: 2025-12-01 09:53:22.503821611 +0000 UTC m=+0.058002499 container create ac3e97fe20d0f2d4e82d221ffc6ef3e852e457fb2834210d4a9cf09b6615b095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Dec  1 04:53:22 np0005540825 systemd[1]: Started libpod-conmon-ac3e97fe20d0f2d4e82d221ffc6ef3e852e457fb2834210d4a9cf09b6615b095.scope.
Dec  1 04:53:22 np0005540825 podman[106868]: 2025-12-01 09:53:22.482557257 +0000 UTC m=+0.036738175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:53:22 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:53:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a91cddfb09a6b8e9ebe20d92c1d7668cc01b43220e1d475ef282c034bac952/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a91cddfb09a6b8e9ebe20d92c1d7668cc01b43220e1d475ef282c034bac952/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a91cddfb09a6b8e9ebe20d92c1d7668cc01b43220e1d475ef282c034bac952/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a91cddfb09a6b8e9ebe20d92c1d7668cc01b43220e1d475ef282c034bac952/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:22 np0005540825 podman[106868]: 2025-12-01 09:53:22.635558596 +0000 UTC m=+0.189739564 container init ac3e97fe20d0f2d4e82d221ffc6ef3e852e457fb2834210d4a9cf09b6615b095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec  1 04:53:22 np0005540825 podman[106868]: 2025-12-01 09:53:22.646387023 +0000 UTC m=+0.200567931 container start ac3e97fe20d0f2d4e82d221ffc6ef3e852e457fb2834210d4a9cf09b6615b095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Dec  1 04:53:22 np0005540825 podman[106868]: 2025-12-01 09:53:22.651522759 +0000 UTC m=+0.205703677 container attach ac3e97fe20d0f2d4e82d221ffc6ef3e852e457fb2834210d4a9cf09b6615b095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:53:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec  1 04:53:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec  1 04:53:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 126 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=95/96 n=2 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=126) [0]/[1] r=0 lpr=126 pi=[95,126)/1 crt=56'1015 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:53:22 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 126 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=95/96 n=2 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=126) [0]/[1] r=0 lpr=126 pi=[95,126)/1 crt=56'1015 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:53:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec  1 04:53:22 np0005540825 nice_almeida[106884]: {
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:    "1": [
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:        {
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            "devices": [
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "/dev/loop3"
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            ],
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            "lv_name": "ceph_lv0",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            "lv_size": "21470642176",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            "name": "ceph_lv0",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            "tags": {
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.cephx_lockbox_secret": "",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.cluster_name": "ceph",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.crush_device_class": "",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.encrypted": "0",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.osd_id": "1",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.type": "block",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.vdo": "0",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:                "ceph.with_tpm": "0"
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            },
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            "type": "block",
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:            "vg_name": "ceph_vg0"
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:        }
Dec  1 04:53:22 np0005540825 nice_almeida[106884]:    ]
Dec  1 04:53:22 np0005540825 nice_almeida[106884]: }
Dec  1 04:53:22 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  1 04:53:23 np0005540825 systemd[1]: libpod-ac3e97fe20d0f2d4e82d221ffc6ef3e852e457fb2834210d4a9cf09b6615b095.scope: Deactivated successfully.
Dec  1 04:53:23 np0005540825 podman[106868]: 2025-12-01 09:53:23.0066653 +0000 UTC m=+0.560846218 container died ac3e97fe20d0f2d4e82d221ffc6ef3e852e457fb2834210d4a9cf09b6615b095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:53:23 np0005540825 systemd[1]: var-lib-containers-storage-overlay-10a91cddfb09a6b8e9ebe20d92c1d7668cc01b43220e1d475ef282c034bac952-merged.mount: Deactivated successfully.
Dec  1 04:53:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:23 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:23 np0005540825 podman[106868]: 2025-12-01 09:53:23.065639744 +0000 UTC m=+0.619820622 container remove ac3e97fe20d0f2d4e82d221ffc6ef3e852e457fb2834210d4a9cf09b6615b095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_almeida, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:53:23 np0005540825 systemd[1]: libpod-conmon-ac3e97fe20d0f2d4e82d221ffc6ef3e852e457fb2834210d4a9cf09b6615b095.scope: Deactivated successfully.
Dec  1 04:53:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec  1 04:53:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec  1 04:53:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  1 04:53:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:23.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:23 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:53:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec  1 04:53:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  1 04:53:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec  1 04:53:23 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec  1 04:53:23 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 127 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=126/127 n=2 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=126) [0]/[1] async=[0] r=0 lpr=126 pi=[95,126)/1 crt=56'1015 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:53:23 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  1 04:53:23 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  1 04:53:24 np0005540825 podman[106997]: 2025-12-01 09:53:24.101138201 +0000 UTC m=+0.067676906 container create 1b5e967079c3808786916dcdfa6ef80e64a202ccbd9a88989e1d1ffb00064126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tesla, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 04:53:24 np0005540825 systemd[1]: Started libpod-conmon-1b5e967079c3808786916dcdfa6ef80e64a202ccbd9a88989e1d1ffb00064126.scope.
Dec  1 04:53:24 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:53:24 np0005540825 podman[106997]: 2025-12-01 09:53:24.07622421 +0000 UTC m=+0.042762955 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:53:24 np0005540825 podman[106997]: 2025-12-01 09:53:24.174429435 +0000 UTC m=+0.140968160 container init 1b5e967079c3808786916dcdfa6ef80e64a202ccbd9a88989e1d1ffb00064126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tesla, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:53:24 np0005540825 podman[106997]: 2025-12-01 09:53:24.181208355 +0000 UTC m=+0.147747050 container start 1b5e967079c3808786916dcdfa6ef80e64a202ccbd9a88989e1d1ffb00064126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tesla, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  1 04:53:24 np0005540825 podman[106997]: 2025-12-01 09:53:24.18481497 +0000 UTC m=+0.151353695 container attach 1b5e967079c3808786916dcdfa6ef80e64a202ccbd9a88989e1d1ffb00064126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:53:24 np0005540825 nifty_tesla[107013]: 167 167
Dec  1 04:53:24 np0005540825 systemd[1]: libpod-1b5e967079c3808786916dcdfa6ef80e64a202ccbd9a88989e1d1ffb00064126.scope: Deactivated successfully.
Dec  1 04:53:24 np0005540825 podman[106997]: 2025-12-01 09:53:24.186860195 +0000 UTC m=+0.153398900 container died 1b5e967079c3808786916dcdfa6ef80e64a202ccbd9a88989e1d1ffb00064126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 04:53:24 np0005540825 systemd[1]: var-lib-containers-storage-overlay-8c559f1342c161a34dc6e4a3d9027d549ff503f37fe38bda70f787cd8337eba2-merged.mount: Deactivated successfully.
Dec  1 04:53:24 np0005540825 podman[106997]: 2025-12-01 09:53:24.232231348 +0000 UTC m=+0.198770063 container remove 1b5e967079c3808786916dcdfa6ef80e64a202ccbd9a88989e1d1ffb00064126 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tesla, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 04:53:24 np0005540825 systemd[1]: libpod-conmon-1b5e967079c3808786916dcdfa6ef80e64a202ccbd9a88989e1d1ffb00064126.scope: Deactivated successfully.
Dec  1 04:53:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:24 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:24.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:24 np0005540825 podman[107036]: 2025-12-01 09:53:24.460446802 +0000 UTC m=+0.074277372 container create d38cf7c40ab987dfd95fd51340e15cd4eecfde366173e2541c99d18fbdaef355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 04:53:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:53:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:53:24 np0005540825 systemd[1]: Started libpod-conmon-d38cf7c40ab987dfd95fd51340e15cd4eecfde366173e2541c99d18fbdaef355.scope.
Dec  1 04:53:24 np0005540825 podman[107036]: 2025-12-01 09:53:24.43889086 +0000 UTC m=+0.052721430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:53:24 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:53:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab4cfd91283ea4f5ffb385b98d65692145842d447069c0797df82e73f51c2fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab4cfd91283ea4f5ffb385b98d65692145842d447069c0797df82e73f51c2fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab4cfd91283ea4f5ffb385b98d65692145842d447069c0797df82e73f51c2fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab4cfd91283ea4f5ffb385b98d65692145842d447069c0797df82e73f51c2fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:53:24 np0005540825 podman[107036]: 2025-12-01 09:53:24.567706567 +0000 UTC m=+0.181537107 container init d38cf7c40ab987dfd95fd51340e15cd4eecfde366173e2541c99d18fbdaef355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 04:53:24 np0005540825 podman[107036]: 2025-12-01 09:53:24.580352742 +0000 UTC m=+0.194183272 container start d38cf7c40ab987dfd95fd51340e15cd4eecfde366173e2541c99d18fbdaef355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  1 04:53:24 np0005540825 podman[107036]: 2025-12-01 09:53:24.58405282 +0000 UTC m=+0.197883350 container attach d38cf7c40ab987dfd95fd51340e15cd4eecfde366173e2541c99d18fbdaef355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 04:53:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec  1 04:53:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec  1 04:53:25 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec  1 04:53:25 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 128 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=126/127 n=2 ec=61/50 lis/c=126/95 les/c/f=127/96/0 sis=128 pruub=14.917758942s) [0] async=[0] r=-1 lpr=128 pi=[95,128)/1 crt=56'1015 mlcod 56'1015 active pruub 312.411407471s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:53:25 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 128 pg[10.1b( v 56'1015 (0'0,56'1015] local-lis/les=126/127 n=2 ec=61/50 lis/c=126/95 les/c/f=127/96/0 sis=128 pruub=14.917716026s) [0] r=-1 lpr=128 pi=[95,128)/1 crt=56'1015 mlcod 0'0 unknown NOTIFY pruub 312.411407471s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:53:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:25 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:53:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec  1 04:53:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  1 04:53:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:25.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:25 np0005540825 lvm[107128]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:53:25 np0005540825 lvm[107128]: VG ceph_vg0 finished
Dec  1 04:53:25 np0005540825 focused_bhabha[107052]: {}
Dec  1 04:53:25 np0005540825 systemd[1]: libpod-d38cf7c40ab987dfd95fd51340e15cd4eecfde366173e2541c99d18fbdaef355.scope: Deactivated successfully.
Dec  1 04:53:25 np0005540825 podman[107036]: 2025-12-01 09:53:25.393901522 +0000 UTC m=+1.007732052 container died d38cf7c40ab987dfd95fd51340e15cd4eecfde366173e2541c99d18fbdaef355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:53:25 np0005540825 systemd[1]: libpod-d38cf7c40ab987dfd95fd51340e15cd4eecfde366173e2541c99d18fbdaef355.scope: Consumed 1.126s CPU time.
Dec  1 04:53:25 np0005540825 systemd[1]: var-lib-containers-storage-overlay-aab4cfd91283ea4f5ffb385b98d65692145842d447069c0797df82e73f51c2fa-merged.mount: Deactivated successfully.
Dec  1 04:53:25 np0005540825 podman[107036]: 2025-12-01 09:53:25.447281998 +0000 UTC m=+1.061112528 container remove d38cf7c40ab987dfd95fd51340e15cd4eecfde366173e2541c99d18fbdaef355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:53:25 np0005540825 systemd[1]: libpod-conmon-d38cf7c40ab987dfd95fd51340e15cd4eecfde366173e2541c99d18fbdaef355.scope: Deactivated successfully.
Dec  1 04:53:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:53:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:53:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:25 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec  1 04:53:26 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  1 04:53:26 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:26 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:53:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  1 04:53:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec  1 04:53:26 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec  1 04:53:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:26 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34003df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:26.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:26.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:53:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:27 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:27 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  1 04:53:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 475 B/s rd, 0 op/s; 25 B/s, 0 objects/s recovering
Dec  1 04:53:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:27.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:27 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:28 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:28.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:53:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:29 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec  1 04:53:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:29.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:29 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:30 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:53:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:30.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:53:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:31 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 982 B/s rd, 140 B/s wr, 1 op/s; 15 B/s, 0 objects/s recovering
Dec  1 04:53:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec  1 04:53:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  1 04:53:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:31.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:31] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Dec  1 04:53:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:31] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Dec  1 04:53:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec  1 04:53:31 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  1 04:53:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:31 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  1 04:53:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec  1 04:53:31 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec  1 04:53:31 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 130 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=79/80 n=5 ec=61/50 lis/c=79/79 les/c/f=80/80/0 sis=130 pruub=14.754728317s) [2] r=-1 lpr=130 pi=[79,130)/1 crt=56'1015 mlcod 0'0 active pruub 318.906066895s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:53:31 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 130 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=79/80 n=5 ec=61/50 lis/c=79/79 les/c/f=80/80/0 sis=130 pruub=14.754609108s) [2] r=-1 lpr=130 pi=[79,130)/1 crt=56'1015 mlcod 0'0 unknown NOTIFY pruub 318.906066895s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:53:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:31 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:53:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:32 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:32.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec  1 04:53:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  1 04:53:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec  1 04:53:32 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec  1 04:53:32 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 131 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=79/80 n=5 ec=61/50 lis/c=79/79 les/c/f=80/80/0 sis=131) [2]/[1] r=0 lpr=131 pi=[79,131)/1 crt=56'1015 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:53:32 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 131 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=79/80 n=5 ec=61/50 lis/c=79/79 les/c/f=80/80/0 sis=131) [2]/[1] r=0 lpr=131 pi=[79,131)/1 crt=56'1015 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  1 04:53:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:33 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 713 B/s rd, 142 B/s wr, 0 op/s
Dec  1 04:53:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  1 04:53:33 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:53:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:33.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:33 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec  1 04:53:33 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:53:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec  1 04:53:33 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  1 04:53:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 132 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=105/105 les/c/f=106/106/0 sis=132) [1] r=0 lpr=132 pi=[105,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:53:33 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec  1 04:53:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 132 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=131/132 n=5 ec=61/50 lis/c=79/79 les/c/f=80/80/0 sis=131) [2]/[1] async=[2] r=0 lpr=131 pi=[79,131)/1 crt=56'1015 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:53:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:53:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec  1 04:53:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec  1 04:53:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 133 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=105/105 les/c/f=106/106/0 sis=133) [1]/[2] r=-1 lpr=133 pi=[105,133)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:53:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 133 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=105/105 les/c/f=106/106/0 sis=133) [1]/[2] r=-1 lpr=133 pi=[105,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  1 04:53:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 133 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=131/132 n=5 ec=61/50 lis/c=131/79 les/c/f=132/80/0 sis=133 pruub=15.814307213s) [2] async=[2] r=-1 lpr=133 pi=[79,133)/1 crt=56'1015 mlcod 56'1015 active pruub 322.207336426s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:53:33 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 133 pg[10.1e( v 56'1015 (0'0,56'1015] local-lis/les=131/132 n=5 ec=61/50 lis/c=131/79 les/c/f=132/80/0 sis=133 pruub=15.814217567s) [2] r=-1 lpr=133 pi=[79,133)/1 crt=56'1015 mlcod 0'0 unknown NOTIFY pruub 322.207336426s@ mbc={}] state<Start>: transitioning to Stray
Dec  1 04:53:33 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec  1 04:53:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:34 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:34.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:34 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  1 04:53:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:34 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:53:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:34 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:53:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec  1 04:53:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec  1 04:53:34 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec  1 04:53:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:35 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec  1 04:53:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:53:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:35.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:53:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:35 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec  1 04:53:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec  1 04:53:36 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 135 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=133/105 les/c/f=134/106/0 sis=135) [1] r=0 lpr=135 pi=[105,135)/1 luod=0'0 crt=56'1015 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  1 04:53:36 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 135 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=0/0 n=5 ec=61/50 lis/c=133/105 les/c/f=134/106/0 sis=135) [1] r=0 lpr=135 pi=[105,135)/1 crt=56'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  1 04:53:36 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec  1 04:53:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:36 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:53:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:36.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:53:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:36.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:53:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:36.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:53:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec  1 04:53:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec  1 04:53:37 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec  1 04:53:37 np0005540825 ceph-osd[82809]: osd.1 pg_epoch: 136 pg[10.1f( v 56'1015 (0'0,56'1015] local-lis/les=135/136 n=5 ec=61/50 lis/c=133/105 les/c/f=134/106/0 sis=135) [1] r=0 lpr=135 pi=[105,135)/1 crt=56'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  1 04:53:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:37 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 2.9 KiB/s wr, 8 op/s; 31 B/s, 2 objects/s recovering
Dec  1 04:53:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:53:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:37.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:53:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:37 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:37 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 04:53:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:38 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:53:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:38.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:53:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:53:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:39 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.9 KiB/s wr, 5 op/s; 20 B/s, 1 objects/s recovering
Dec  1 04:53:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:53:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:39.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:53:39
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.log', '.nfs', '.mgr', 'backups', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'images']
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 04:53:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:53:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:53:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:53:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:39 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:40 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:53:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:40.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:53:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:41 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.8 KiB/s wr, 5 op/s; 17 B/s, 1 objects/s recovering
Dec  1 04:53:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:41.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:41] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Dec  1 04:53:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:41] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Dec  1 04:53:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:41 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:42 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:53:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:42.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:53:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:43 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.4 KiB/s wr, 4 op/s; 13 B/s, 1 objects/s recovering
Dec  1 04:53:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:43.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:43 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:53:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:44 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:44.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095344 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 04:53:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:45 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 331 B/s rd, 110 B/s wr, 0 op/s
Dec  1 04:53:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:45.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:45 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:46 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:53:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:46.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:53:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:46.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:53:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:47 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 399 B/s rd, 99 B/s wr, 0 op/s
Dec  1 04:53:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:47.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:47 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:48 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:48.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:53:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:49 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v76: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:53:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:49.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:49 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:50 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:50.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:51 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:53:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:51.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:51] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Dec  1 04:53:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:53:51] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Dec  1 04:53:51 np0005540825 python3.9[107402]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:53:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:51 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:52 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:52.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:53 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v78: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:53:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:53:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:53.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:53:53 np0005540825 python3.9[107691]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  1 04:53:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:53 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:53:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:54 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:54.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:54 np0005540825 python3.9[107843]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  1 04:53:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:53:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:53:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:55 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:55 np0005540825 python3.9[107995]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:53:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:53:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:55.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:55 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:56 np0005540825 python3.9[108149]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  1 04:53:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:56 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:56.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:56.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:53:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:56.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:53:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:53:56.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:53:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:57 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:53:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:57.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:57 np0005540825 python3.9[108328]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:53:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:57 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:58 np0005540825 python3.9[108480]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:53:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:58 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:53:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:53:58.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:53:58 np0005540825 python3.9[108558]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:53:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:53:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:59 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:53:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:53:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:53:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:53:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:53:59.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:53:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:53:59 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14001670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:00 np0005540825 python3.9[108712]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:54:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:00 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004560 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:54:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:00.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:54:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:01 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 04:54:01 np0005540825 python3.9[108867]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  1 04:54:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:01.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:01] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Dec  1 04:54:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:01] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Dec  1 04:54:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:01 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:02 np0005540825 python3.9[109021]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  1 04:54:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:02 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:54:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:02.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:54:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:03 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:03.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:03 np0005540825 python3.9[109175]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 04:54:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:03 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:04 np0005540825 python3.9[109329]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  1 04:54:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:04 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:54:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:04.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:54:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:05 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:05 np0005540825 python3.9[109481]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:54:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v84: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:05.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:05 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad080045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:06 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:06.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:54:06.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:54:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:07 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:54:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:54:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:07.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:54:07 np0005540825 python3.9[109637]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:54:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:07 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:08 np0005540825 python3.9[109790]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:54:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:08 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad080045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:54:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:08.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:54:08 np0005540825 python3.9[109868]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:54:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:09 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:09 np0005540825 python3.9[110021]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:54:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:09.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:54:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:54:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:54:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:54:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:54:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f5420a9fa90>)]
Dec  1 04:54:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  1 04:54:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:54:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f5420a9fa00>)]
Dec  1 04:54:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  1 04:54:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:09 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:09 np0005540825 python3.9[110100]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:54:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:10 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:10.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:10 np0005540825 python3.9[110252]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:54:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad08004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Dec  1 04:54:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:11.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:11] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Dec  1 04:54:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:11] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Dec  1 04:54:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:11 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : mgrmap e33: compute-0.fospow(active, since 93s), standbys: compute-1.ymizfm, compute-2.kdtkls
Dec  1 04:54:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad34001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:12.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:13 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec  1 04:54:13 np0005540825 python3.9[110407]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:54:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:13.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:13 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20000dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:14 np0005540825 python3.9[110560]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  1 04:54:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:14 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:14.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:14 np0005540825 python3.9[110710]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:54:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:15 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec  1 04:54:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:54:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:15.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:54:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:15 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:16 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20000dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:16 np0005540825 python3.9[110864]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:54:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:54:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:16.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:54:16 np0005540825 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  1 04:54:16 np0005540825 systemd[1]: tuned.service: Deactivated successfully.
Dec  1 04:54:16 np0005540825 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  1 04:54:16 np0005540825 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  1 04:54:16 np0005540825 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  1 04:54:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:54:16.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:54:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:54:16.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:54:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:17 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20000dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s
Dec  1 04:54:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:17.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:17 np0005540825 python3.9[111052]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  1 04:54:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:17 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:18 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:18.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:19 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Dec  1 04:54:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:19.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:19 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:20 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:20.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:21 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:21 np0005540825 python3.9[111206]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:54:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Dec  1 04:54:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:54:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:21.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:54:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:21] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Dec  1 04:54:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:21] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Dec  1 04:54:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:21 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:22 np0005540825 python3.9[111362]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:54:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:22 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:22.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:22 np0005540825 systemd[1]: session-39.scope: Deactivated successfully.
Dec  1 04:54:22 np0005540825 systemd[1]: session-39.scope: Consumed 1min 7.125s CPU time.
Dec  1 04:54:22 np0005540825 systemd-logind[789]: Session 39 logged out. Waiting for processes to exit.
Dec  1 04:54:22 np0005540825 systemd-logind[789]: Removed session 39.
Dec  1 04:54:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:23 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v93: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:23.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:23 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:24 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:54:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:24.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:54:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:54:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:54:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:25 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:25.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:25 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:26 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:26.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:26 np0005540825 podman[111518]: 2025-12-01 09:54:26.507746105 +0000 UTC m=+0.079144859 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Dec  1 04:54:26 np0005540825 podman[111518]: 2025-12-01 09:54:26.612341237 +0000 UTC m=+0.183739931 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 04:54:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:54:26.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:54:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:27 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:27 np0005540825 podman[111639]: 2025-12-01 09:54:27.121095262 +0000 UTC m=+0.057577887 container exec 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:54:27 np0005540825 podman[111639]: 2025-12-01 09:54:27.127291226 +0000 UTC m=+0.063773841 container exec_died 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:54:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:54:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:27.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:27 np0005540825 podman[111730]: 2025-12-01 09:54:27.535615319 +0000 UTC m=+0.089148083 container exec 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:54:27 np0005540825 podman[111730]: 2025-12-01 09:54:27.548667675 +0000 UTC m=+0.102200429 container exec_died 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 04:54:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:27 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:27 np0005540825 podman[111793]: 2025-12-01 09:54:27.763367566 +0000 UTC m=+0.053156990 container exec 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 04:54:27 np0005540825 podman[111793]: 2025-12-01 09:54:27.77255562 +0000 UTC m=+0.062345034 container exec_died 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 04:54:28 np0005540825 podman[111862]: 2025-12-01 09:54:28.007119127 +0000 UTC m=+0.047374117 container exec a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, vcs-type=git, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived)
Dec  1 04:54:28 np0005540825 podman[111862]: 2025-12-01 09:54:28.020786699 +0000 UTC m=+0.061041679 container exec_died a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.buildah.version=1.28.2, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 04:54:28 np0005540825 podman[111927]: 2025-12-01 09:54:28.249329917 +0000 UTC m=+0.063432382 container exec fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:54:28 np0005540825 systemd-logind[789]: New session 40 of user zuul.
Dec  1 04:54:28 np0005540825 systemd[1]: Started Session 40 of User zuul.
Dec  1 04:54:28 np0005540825 podman[111927]: 2025-12-01 09:54:28.314289999 +0000 UTC m=+0.128392444 container exec_died fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:54:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:28 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:28 np0005540825 podman[112056]: 2025-12-01 09:54:28.530757827 +0000 UTC m=+0.049184745 container exec 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:54:28 np0005540825 podman[112056]: 2025-12-01 09:54:28.693621264 +0000 UTC m=+0.212048162 container exec_died 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:54:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:29 np0005540825 podman[112235]: 2025-12-01 09:54:29.087478483 +0000 UTC m=+0.055892612 container exec f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:54:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:29 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20002c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:29 np0005540825 podman[112235]: 2025-12-01 09:54:29.134737776 +0000 UTC m=+0.103151865 container exec_died f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:54:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:29.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:54:29 np0005540825 python3.9[112276]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:54:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:29 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:54:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:54:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:54:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:54:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:30 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:30.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:30 np0005540825 podman[112614]: 2025-12-01 09:54:30.449885524 +0000 UTC m=+0.042189619 container create accbc746e537c6a7648e1e55771f66aaa3585657592dcb61c070134fc554fd1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:54:30 np0005540825 systemd[1]: Started libpod-conmon-accbc746e537c6a7648e1e55771f66aaa3585657592dcb61c070134fc554fd1f.scope.
Dec  1 04:54:30 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:54:30 np0005540825 podman[112614]: 2025-12-01 09:54:30.526567306 +0000 UTC m=+0.118871291 container init accbc746e537c6a7648e1e55771f66aaa3585657592dcb61c070134fc554fd1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  1 04:54:30 np0005540825 podman[112614]: 2025-12-01 09:54:30.431925628 +0000 UTC m=+0.024229603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:54:30 np0005540825 podman[112614]: 2025-12-01 09:54:30.534806885 +0000 UTC m=+0.127110850 container start accbc746e537c6a7648e1e55771f66aaa3585657592dcb61c070134fc554fd1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 04:54:30 np0005540825 podman[112614]: 2025-12-01 09:54:30.537715942 +0000 UTC m=+0.130019947 container attach accbc746e537c6a7648e1e55771f66aaa3585657592dcb61c070134fc554fd1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:54:30 np0005540825 systemd[1]: libpod-accbc746e537c6a7648e1e55771f66aaa3585657592dcb61c070134fc554fd1f.scope: Deactivated successfully.
Dec  1 04:54:30 np0005540825 frosty_kepler[112651]: 167 167
Dec  1 04:54:30 np0005540825 podman[112614]: 2025-12-01 09:54:30.542192971 +0000 UTC m=+0.134496936 container died accbc746e537c6a7648e1e55771f66aaa3585657592dcb61c070134fc554fd1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  1 04:54:30 np0005540825 conmon[112651]: conmon accbc746e537c6a7648e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-accbc746e537c6a7648e1e55771f66aaa3585657592dcb61c070134fc554fd1f.scope/container/memory.events
Dec  1 04:54:30 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1306172c184df88ccf7cb04b29c3b36902782790fa065dd030291396318ee1b6-merged.mount: Deactivated successfully.
Dec  1 04:54:30 np0005540825 podman[112614]: 2025-12-01 09:54:30.595863193 +0000 UTC m=+0.188167158 container remove accbc746e537c6a7648e1e55771f66aaa3585657592dcb61c070134fc554fd1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 04:54:30 np0005540825 systemd[1]: libpod-conmon-accbc746e537c6a7648e1e55771f66aaa3585657592dcb61c070134fc554fd1f.scope: Deactivated successfully.
Dec  1 04:54:30 np0005540825 python3.9[112648]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  1 04:54:30 np0005540825 podman[112699]: 2025-12-01 09:54:30.782973403 +0000 UTC m=+0.046727180 container create ca2adbf444403d585cc7ce35245d0a74b032321b00b151f7d42439cbdbf03b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jepsen, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  1 04:54:30 np0005540825 systemd[1]: Started libpod-conmon-ca2adbf444403d585cc7ce35245d0a74b032321b00b151f7d42439cbdbf03b41.scope.
Dec  1 04:54:30 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:54:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81fc85573f16e2d1c3042383d4377179788f231cda0dbf6171570b6efebfab0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:30 np0005540825 podman[112699]: 2025-12-01 09:54:30.763937618 +0000 UTC m=+0.027691395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:54:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81fc85573f16e2d1c3042383d4377179788f231cda0dbf6171570b6efebfab0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81fc85573f16e2d1c3042383d4377179788f231cda0dbf6171570b6efebfab0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81fc85573f16e2d1c3042383d4377179788f231cda0dbf6171570b6efebfab0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81fc85573f16e2d1c3042383d4377179788f231cda0dbf6171570b6efebfab0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:30 np0005540825 podman[112699]: 2025-12-01 09:54:30.885012297 +0000 UTC m=+0.148766154 container init ca2adbf444403d585cc7ce35245d0a74b032321b00b151f7d42439cbdbf03b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jepsen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:54:30 np0005540825 podman[112699]: 2025-12-01 09:54:30.892222038 +0000 UTC m=+0.155975815 container start ca2adbf444403d585cc7ce35245d0a74b032321b00b151f7d42439cbdbf03b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 04:54:30 np0005540825 podman[112699]: 2025-12-01 09:54:30.896849041 +0000 UTC m=+0.160602928 container attach ca2adbf444403d585cc7ce35245d0a74b032321b00b151f7d42439cbdbf03b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 04:54:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:31 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:31 np0005540825 serene_jepsen[112716]: --> passed data devices: 0 physical, 1 LVM
Dec  1 04:54:31 np0005540825 serene_jepsen[112716]: --> All data devices are unavailable
Dec  1 04:54:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 04:54:31 np0005540825 systemd[1]: libpod-ca2adbf444403d585cc7ce35245d0a74b032321b00b151f7d42439cbdbf03b41.scope: Deactivated successfully.
Dec  1 04:54:31 np0005540825 podman[112699]: 2025-12-01 09:54:31.29899081 +0000 UTC m=+0.562744657 container died ca2adbf444403d585cc7ce35245d0a74b032321b00b151f7d42439cbdbf03b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 04:54:31 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a81fc85573f16e2d1c3042383d4377179788f231cda0dbf6171570b6efebfab0-merged.mount: Deactivated successfully.
Dec  1 04:54:31 np0005540825 podman[112699]: 2025-12-01 09:54:31.357608454 +0000 UTC m=+0.621362231 container remove ca2adbf444403d585cc7ce35245d0a74b032321b00b151f7d42439cbdbf03b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:54:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:31.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:31 np0005540825 systemd[1]: libpod-conmon-ca2adbf444403d585cc7ce35245d0a74b032321b00b151f7d42439cbdbf03b41.scope: Deactivated successfully.
Dec  1 04:54:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:31] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Dec  1 04:54:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:31] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Dec  1 04:54:31 np0005540825 python3.9[112872]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:54:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:31 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:31 np0005540825 podman[112970]: 2025-12-01 09:54:31.953205701 +0000 UTC m=+0.051442485 container create 6bb6dcd358145165531dd34e1e4c82d80f8f01fd6487334320a3db9c2b103dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:54:31 np0005540825 systemd[1]: Started libpod-conmon-6bb6dcd358145165531dd34e1e4c82d80f8f01fd6487334320a3db9c2b103dd2.scope.
Dec  1 04:54:32 np0005540825 podman[112970]: 2025-12-01 09:54:31.926193485 +0000 UTC m=+0.024430349 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:54:32 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:54:32 np0005540825 podman[112970]: 2025-12-01 09:54:32.051431274 +0000 UTC m=+0.149668068 container init 6bb6dcd358145165531dd34e1e4c82d80f8f01fd6487334320a3db9c2b103dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 04:54:32 np0005540825 podman[112970]: 2025-12-01 09:54:32.057184407 +0000 UTC m=+0.155421171 container start 6bb6dcd358145165531dd34e1e4c82d80f8f01fd6487334320a3db9c2b103dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 04:54:32 np0005540825 podman[112970]: 2025-12-01 09:54:32.060877795 +0000 UTC m=+0.159114559 container attach 6bb6dcd358145165531dd34e1e4c82d80f8f01fd6487334320a3db9c2b103dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:54:32 np0005540825 stoic_bardeen[112986]: 167 167
Dec  1 04:54:32 np0005540825 systemd[1]: libpod-6bb6dcd358145165531dd34e1e4c82d80f8f01fd6487334320a3db9c2b103dd2.scope: Deactivated successfully.
Dec  1 04:54:32 np0005540825 podman[113012]: 2025-12-01 09:54:32.116094818 +0000 UTC m=+0.030767756 container died 6bb6dcd358145165531dd34e1e4c82d80f8f01fd6487334320a3db9c2b103dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  1 04:54:32 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f6dcb225feac5b2a2a683bbaf2592f40421c6bcaaf360bac9a910fdefd9c5733-merged.mount: Deactivated successfully.
Dec  1 04:54:32 np0005540825 podman[113012]: 2025-12-01 09:54:32.16784032 +0000 UTC m=+0.082513198 container remove 6bb6dcd358145165531dd34e1e4c82d80f8f01fd6487334320a3db9c2b103dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:54:32 np0005540825 systemd[1]: libpod-conmon-6bb6dcd358145165531dd34e1e4c82d80f8f01fd6487334320a3db9c2b103dd2.scope: Deactivated successfully.
Dec  1 04:54:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:32 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:32 np0005540825 podman[113089]: 2025-12-01 09:54:32.390502022 +0000 UTC m=+0.055108352 container create 15fe60d70f7ac1f37f364c13b387c8dd327bfb619cb12cf18c5d22004a28926d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_brown, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:54:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:54:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:32.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:54:32 np0005540825 systemd[1]: Started libpod-conmon-15fe60d70f7ac1f37f364c13b387c8dd327bfb619cb12cf18c5d22004a28926d.scope.
Dec  1 04:54:32 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:54:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6b333a3c13e6c3b2f1a48aabaa6bcb90530c79e55e7dbdd2a3eed3bb8520cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6b333a3c13e6c3b2f1a48aabaa6bcb90530c79e55e7dbdd2a3eed3bb8520cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6b333a3c13e6c3b2f1a48aabaa6bcb90530c79e55e7dbdd2a3eed3bb8520cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6b333a3c13e6c3b2f1a48aabaa6bcb90530c79e55e7dbdd2a3eed3bb8520cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:32 np0005540825 podman[113089]: 2025-12-01 09:54:32.366796913 +0000 UTC m=+0.031403273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:54:32 np0005540825 podman[113089]: 2025-12-01 09:54:32.480379284 +0000 UTC m=+0.144985624 container init 15fe60d70f7ac1f37f364c13b387c8dd327bfb619cb12cf18c5d22004a28926d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_brown, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:54:32 np0005540825 podman[113089]: 2025-12-01 09:54:32.487214695 +0000 UTC m=+0.151821035 container start 15fe60d70f7ac1f37f364c13b387c8dd327bfb619cb12cf18c5d22004a28926d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 04:54:32 np0005540825 podman[113089]: 2025-12-01 09:54:32.491450617 +0000 UTC m=+0.156057157 container attach 15fe60d70f7ac1f37f364c13b387c8dd327bfb619cb12cf18c5d22004a28926d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_brown, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:54:32 np0005540825 python3.9[113083]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 04:54:32 np0005540825 romantic_brown[113106]: {
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:    "1": [
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:        {
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            "devices": [
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "/dev/loop3"
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            ],
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            "lv_name": "ceph_lv0",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            "lv_size": "21470642176",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            "name": "ceph_lv0",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            "tags": {
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.cephx_lockbox_secret": "",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.cluster_name": "ceph",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.crush_device_class": "",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.encrypted": "0",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.osd_id": "1",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.type": "block",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.vdo": "0",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:                "ceph.with_tpm": "0"
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            },
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            "type": "block",
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:            "vg_name": "ceph_vg0"
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:        }
Dec  1 04:54:32 np0005540825 romantic_brown[113106]:    ]
Dec  1 04:54:32 np0005540825 romantic_brown[113106]: }
Dec  1 04:54:32 np0005540825 systemd[1]: libpod-15fe60d70f7ac1f37f364c13b387c8dd327bfb619cb12cf18c5d22004a28926d.scope: Deactivated successfully.
Dec  1 04:54:32 np0005540825 podman[113089]: 2025-12-01 09:54:32.777409767 +0000 UTC m=+0.442016117 container died 15fe60d70f7ac1f37f364c13b387c8dd327bfb619cb12cf18c5d22004a28926d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_brown, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:54:32 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7b6b333a3c13e6c3b2f1a48aabaa6bcb90530c79e55e7dbdd2a3eed3bb8520cb-merged.mount: Deactivated successfully.
Dec  1 04:54:32 np0005540825 podman[113089]: 2025-12-01 09:54:32.843034026 +0000 UTC m=+0.507640366 container remove 15fe60d70f7ac1f37f364c13b387c8dd327bfb619cb12cf18c5d22004a28926d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:54:32 np0005540825 systemd[1]: libpod-conmon-15fe60d70f7ac1f37f364c13b387c8dd327bfb619cb12cf18c5d22004a28926d.scope: Deactivated successfully.
Dec  1 04:54:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:33 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:33.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:33 np0005540825 podman[113218]: 2025-12-01 09:54:33.487191209 +0000 UTC m=+0.047029656 container create 01b670088f2ed0c4c68b9c5c1edca9a35391881fa3b771d114f27956f04ab8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 04:54:33 np0005540825 systemd[1]: Started libpod-conmon-01b670088f2ed0c4c68b9c5c1edca9a35391881fa3b771d114f27956f04ab8e3.scope.
Dec  1 04:54:33 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:54:33 np0005540825 podman[113218]: 2025-12-01 09:54:33.468765392 +0000 UTC m=+0.028603869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:54:33 np0005540825 podman[113218]: 2025-12-01 09:54:33.577997486 +0000 UTC m=+0.137835953 container init 01b670088f2ed0c4c68b9c5c1edca9a35391881fa3b771d114f27956f04ab8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Dec  1 04:54:33 np0005540825 podman[113218]: 2025-12-01 09:54:33.587384975 +0000 UTC m=+0.147223462 container start 01b670088f2ed0c4c68b9c5c1edca9a35391881fa3b771d114f27956f04ab8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  1 04:54:33 np0005540825 reverent_cartwright[113234]: 167 167
Dec  1 04:54:33 np0005540825 systemd[1]: libpod-01b670088f2ed0c4c68b9c5c1edca9a35391881fa3b771d114f27956f04ab8e3.scope: Deactivated successfully.
Dec  1 04:54:33 np0005540825 podman[113218]: 2025-12-01 09:54:33.593108177 +0000 UTC m=+0.152946654 container attach 01b670088f2ed0c4c68b9c5c1edca9a35391881fa3b771d114f27956f04ab8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:54:33 np0005540825 podman[113218]: 2025-12-01 09:54:33.59626593 +0000 UTC m=+0.156104377 container died 01b670088f2ed0c4c68b9c5c1edca9a35391881fa3b771d114f27956f04ab8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:54:33 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d30e7646fa2c70f38e536555849a390ac10125253d9db0129892a2168daa40b6-merged.mount: Deactivated successfully.
Dec  1 04:54:33 np0005540825 podman[113218]: 2025-12-01 09:54:33.635835699 +0000 UTC m=+0.195674146 container remove 01b670088f2ed0c4c68b9c5c1edca9a35391881fa3b771d114f27956f04ab8e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec  1 04:54:33 np0005540825 systemd[1]: libpod-conmon-01b670088f2ed0c4c68b9c5c1edca9a35391881fa3b771d114f27956f04ab8e3.scope: Deactivated successfully.
Dec  1 04:54:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:33 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:33 np0005540825 podman[113258]: 2025-12-01 09:54:33.817622528 +0000 UTC m=+0.057366932 container create 520232662f3300c384af0043937b968ff25a603dac8c6e462ae6bb87bd6c3c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:54:33 np0005540825 systemd[1]: Started libpod-conmon-520232662f3300c384af0043937b968ff25a603dac8c6e462ae6bb87bd6c3c56.scope.
Dec  1 04:54:33 np0005540825 podman[113258]: 2025-12-01 09:54:33.800230467 +0000 UTC m=+0.039974881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:54:33 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:54:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d226b17accd0d480939b693aa010ddb639852ce7fa65df63b41cbdaf01a92a72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d226b17accd0d480939b693aa010ddb639852ce7fa65df63b41cbdaf01a92a72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d226b17accd0d480939b693aa010ddb639852ce7fa65df63b41cbdaf01a92a72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d226b17accd0d480939b693aa010ddb639852ce7fa65df63b41cbdaf01a92a72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:54:33 np0005540825 podman[113258]: 2025-12-01 09:54:33.920154385 +0000 UTC m=+0.159898849 container init 520232662f3300c384af0043937b968ff25a603dac8c6e462ae6bb87bd6c3c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_colden, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 04:54:33 np0005540825 podman[113258]: 2025-12-01 09:54:33.932055321 +0000 UTC m=+0.171799725 container start 520232662f3300c384af0043937b968ff25a603dac8c6e462ae6bb87bd6c3c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_colden, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:54:33 np0005540825 podman[113258]: 2025-12-01 09:54:33.936488648 +0000 UTC m=+0.176233152 container attach 520232662f3300c384af0043937b968ff25a603dac8c6e462ae6bb87bd6c3c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  1 04:54:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:34 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:34.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:34 np0005540825 lvm[113502]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:54:34 np0005540825 lvm[113502]: VG ceph_vg0 finished
Dec  1 04:54:34 np0005540825 lucid_colden[113293]: {}
Dec  1 04:54:34 np0005540825 systemd[1]: libpod-520232662f3300c384af0043937b968ff25a603dac8c6e462ae6bb87bd6c3c56.scope: Deactivated successfully.
Dec  1 04:54:34 np0005540825 systemd[1]: libpod-520232662f3300c384af0043937b968ff25a603dac8c6e462ae6bb87bd6c3c56.scope: Consumed 1.358s CPU time.
Dec  1 04:54:34 np0005540825 podman[113258]: 2025-12-01 09:54:34.728091981 +0000 UTC m=+0.967836385 container died 520232662f3300c384af0043937b968ff25a603dac8c6e462ae6bb87bd6c3c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_colden, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 04:54:34 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d226b17accd0d480939b693aa010ddb639852ce7fa65df63b41cbdaf01a92a72-merged.mount: Deactivated successfully.
Dec  1 04:54:34 np0005540825 podman[113258]: 2025-12-01 09:54:34.780381837 +0000 UTC m=+1.020126241 container remove 520232662f3300c384af0043937b968ff25a603dac8c6e462ae6bb87bd6c3c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 04:54:34 np0005540825 systemd[1]: libpod-conmon-520232662f3300c384af0043937b968ff25a603dac8c6e462ae6bb87bd6c3c56.scope: Deactivated successfully.
Dec  1 04:54:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:54:34 np0005540825 python3.9[113500]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:54:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:54:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:35 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:54:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:35.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:54:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:35 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:35 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:35 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:54:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:36 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  1 04:54:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:36.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  1 04:54:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:54:36.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:54:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:54:36.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:54:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:37 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:54:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:37.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:37 np0005540825 python3.9[113722]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 04:54:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:37 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:38 np0005540825 python3.9[113876]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:54:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:38 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:54:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:38.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:54:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:39 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.352975) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582879353667, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2340, "num_deletes": 252, "total_data_size": 6619264, "memory_usage": 6847728, "flush_reason": "Manual Compaction"}
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:54:39
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['.nfs', 'default.rgw.control', 'backups', 'images', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.mgr']
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 04:54:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:39.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582879405087, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6164538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8618, "largest_seqno": 10957, "table_properties": {"data_size": 6153332, "index_size": 7252, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 24268, "raw_average_key_size": 21, "raw_value_size": 6130188, "raw_average_value_size": 5330, "num_data_blocks": 321, "num_entries": 1150, "num_filter_entries": 1150, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582754, "oldest_key_time": 1764582754, "file_creation_time": 1764582879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 51765 microseconds, and 20042 cpu microseconds.
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.405362) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6164538 bytes OK
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.405402) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.406977) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.407008) EVENT_LOG_v1 {"time_micros": 1764582879406997, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.407035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 6608860, prev total WAL file size 6608860, number of live WAL files 2.
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.409710) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(6020KB)], [23(12MB)]
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582879409804, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 19199590, "oldest_snapshot_seqno": -1}
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:54:39 np0005540825 python3.9[114030]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4128 keys, 14810029 bytes, temperature: kUnknown
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582879539521, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14810029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14776524, "index_size": 22067, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 105224, "raw_average_key_size": 25, "raw_value_size": 14695087, "raw_average_value_size": 3559, "num_data_blocks": 945, "num_entries": 4128, "num_filter_entries": 4128, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764582879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.539843) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14810029 bytes
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.541488) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.9 rd, 114.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.9, 12.4 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(5.5) write-amplify(2.4) OK, records in: 4664, records dropped: 536 output_compression: NoCompression
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.541530) EVENT_LOG_v1 {"time_micros": 1764582879541511, "job": 8, "event": "compaction_finished", "compaction_time_micros": 129818, "compaction_time_cpu_micros": 55545, "output_level": 6, "num_output_files": 1, "total_output_size": 14810029, "num_input_records": 4664, "num_output_records": 4128, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582879546681, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582879550718, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.409624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.550929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.550937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.550940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.550943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:54:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:54:39.550946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:54:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:54:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:39 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:40 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:54:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:40.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:54:40 np0005540825 python3.9[114181]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:54:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:41 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 04:54:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:41] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Dec  1 04:54:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:41] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Dec  1 04:54:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:41.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:41 np0005540825 python3.9[114341]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:54:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:41 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:42 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:54:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:42.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:54:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:43 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v103: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:43.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:43 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:43 np0005540825 python3.9[114496]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:54:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:44 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:44.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:45 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:45.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:45 np0005540825 python3.9[114785]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 04:54:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:45 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:46 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:46 np0005540825 python3.9[114935]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:54:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:54:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:46.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:54:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:54:46.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:54:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:54:46.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:54:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:47 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:47 np0005540825 python3.9[115089]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:54:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:54:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:47.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:47 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:48 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:54:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:48.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:54:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:49 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v106: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:54:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:49.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:54:49 np0005540825 python3.9[115246]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:54:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:49 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:50 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:50.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:51 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=infra.usagestats t=2025-12-01T09:54:51.214558021Z level=info msg="Usage stats are ready to report"
Dec  1 04:54:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:54:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:51] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Dec  1 04:54:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:54:51] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Dec  1 04:54:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:54:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:51.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:54:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:51 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:51 np0005540825 python3.9[115401]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:54:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:52 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:52.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:52 np0005540825 python3.9[115555]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec  1 04:54:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095453 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 04:54:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:53 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 04:54:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:53.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:53 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:54 np0005540825 systemd[1]: session-40.scope: Deactivated successfully.
Dec  1 04:54:54 np0005540825 systemd[1]: session-40.scope: Consumed 19.188s CPU time.
Dec  1 04:54:54 np0005540825 systemd-logind[789]: Session 40 logged out. Waiting for processes to exit.
Dec  1 04:54:54 np0005540825 systemd-logind[789]: Removed session 40.
Dec  1 04:54:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:54 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:54:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:54.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:54:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:54:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:54:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:55 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad040040a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 04:54:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:54:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:55.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:54:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:55 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:56 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:56.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:54:56.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:54:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:57 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:54:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:57.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:57 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad040040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:58 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:54:58.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:54:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:59 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:54:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 04:54:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:54:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:54:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:54:59.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:54:59 np0005540825 systemd-logind[789]: New session 41 of user zuul.
Dec  1 04:54:59 np0005540825 systemd[1]: Started Session 41 of User zuul.
Dec  1 04:54:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:54:59 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:00 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad040040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:55:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:00.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:55:00 np0005540825 python3.9[115766]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:55:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:01 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:55:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:01] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Dec  1 04:55:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:01] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Dec  1 04:55:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:01.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:01 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:55:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:01 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:01 np0005540825 python3.9[115922]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:55:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:02 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:55:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:02.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:55:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:03 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:03 np0005540825 python3.9[116115]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:55:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:55:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:03.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:03 np0005540825 systemd[1]: session-41.scope: Deactivated successfully.
Dec  1 04:55:03 np0005540825 systemd[1]: session-41.scope: Consumed 2.824s CPU time.
Dec  1 04:55:03 np0005540825 systemd-logind[789]: Session 41 logged out. Waiting for processes to exit.
Dec  1 04:55:03 np0005540825 systemd-logind[789]: Removed session 41.
Dec  1 04:55:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:03 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:04 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:55:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:04.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:55:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:04 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:55:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:04 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:55:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:05 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:55:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:05.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:05 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:06 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:06.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:06.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:55:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:06.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:55:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:07 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 04:55:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:55:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:07.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:55:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:07 : epoch 692d650f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 04:55:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:07 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:08 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad10003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:08.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:09 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 04:55:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:09.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:55:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:55:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:55:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:55:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:55:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:55:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:55:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:55:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:09 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad340041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:10 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:55:11 np0005540825 systemd-logind[789]: New session 42 of user zuul.
Dec  1 04:55:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:11] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Dec  1 04:55:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:11] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Dec  1 04:55:11 np0005540825 systemd[1]: Started Session 42 of User zuul.
Dec  1 04:55:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:11.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:11 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:12 np0005540825 python3.9[116309]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:55:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:12 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c001e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:12.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095513 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 04:55:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:13 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 04:55:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:13.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:13 np0005540825 python3.9[116464]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:55:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:13 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad040041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:14 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:14.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:14 np0005540825 python3.9[116621]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:55:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:15 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c001e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 04:55:15 np0005540825 python3.9[116706]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:55:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:15.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:15 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:16 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad040041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:16.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:16.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:55:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:16.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:55:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:17 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 04:55:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:17.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:17 np0005540825 python3.9[116887]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:55:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:17 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c001e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:18 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:55:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:18.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:55:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:19 np0005540825 python3.9[117082]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:55:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:19 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad040041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:55:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000055s ======
Dec  1 04:55:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:19.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Dec  1 04:55:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:19 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:20 np0005540825 python3.9[117236]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:55:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:20 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c001e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:55:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:20.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:55:21 np0005540825 python3.9[117401]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:55:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:21 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:55:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:21] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Dec  1 04:55:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:21] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Dec  1 04:55:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:55:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:21.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:55:21 np0005540825 python3.9[117481]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:55:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:21 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:22 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:55:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:55:22 np0005540825 python3.9[117633]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:55:23 np0005540825 python3.9[117711]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:55:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:23 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:55:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:55:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:23.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:55:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:23 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:24 np0005540825 python3.9[117865]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:55:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:24 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:24.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:55:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:55:24 np0005540825 python3.9[118017]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:55:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:25 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:55:25 np0005540825 python3.9[118170]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:55:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:25.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:25 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:26 np0005540825 python3.9[118323]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:55:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:26 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:26.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:26.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:55:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:26.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:55:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:26.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:55:27 np0005540825 python3.9[118475]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:55:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:27 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:55:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000055s ======
Dec  1 04:55:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:27.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Dec  1 04:55:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:27 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad20004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:28 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad2c003990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:28.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:29 np0005540825 systemd[90983]: Created slice User Background Tasks Slice.
Dec  1 04:55:29 np0005540825 systemd[90983]: Starting Cleanup of User's Temporary Files and Directories...
Dec  1 04:55:29 np0005540825 systemd[90983]: Finished Cleanup of User's Temporary Files and Directories.
Dec  1 04:55:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:29 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad14001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:55:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:29.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:29 np0005540825 python3.9[118633]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:55:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[96091]: 01/12/2025 09:55:29 : epoch 692d650f : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04004260 fd 48 proxy ignored for local
Dec  1 04:55:29 np0005540825 kernel: ganesha.nfsd[105707]: segfault at 50 ip 00007fade708732e sp 00007fad9effc210 error 4 in libntirpc.so.5.8[7fade706c000+2c000] likely on CPU 0 (core 0, socket 0)
Dec  1 04:55:29 np0005540825 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  1 04:55:29 np0005540825 systemd[1]: Created slice Slice /system/systemd-coredump.
Dec  1 04:55:29 np0005540825 systemd[1]: Started Process Core Dump (PID 118660/UID 0).
Dec  1 04:55:30 np0005540825 python3.9[118789]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:55:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:30.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:31 np0005540825 systemd-coredump[118661]: Process 96095 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 63:#012#0  0x00007fade708732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  1 04:55:31 np0005540825 systemd[1]: systemd-coredump@0-118660-0.service: Deactivated successfully.
Dec  1 04:55:31 np0005540825 systemd[1]: systemd-coredump@0-118660-0.service: Consumed 1.266s CPU time.
Dec  1 04:55:31 np0005540825 podman[118947]: 2025-12-01 09:55:31.210984987 +0000 UTC m=+0.042137525 container died 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:55:31 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5630288d60feffebc6f30f0a5d2221ded4dfcbd43b00316a73953fb6ddb69b29-merged.mount: Deactivated successfully.
Dec  1 04:55:31 np0005540825 podman[118947]: 2025-12-01 09:55:31.260726489 +0000 UTC m=+0.091879007 container remove 385d0b8a0770a5cfcc609cc2d998a61d24533494ce0bce025dda1e75042f6acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:55:31 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Main process exited, code=exited, status=139/n/a
Dec  1 04:55:31 np0005540825 python3.9[118943]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:55:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 04:55:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:31] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Dec  1 04:55:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:31] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Dec  1 04:55:31 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Failed with result 'exit-code'.
Dec  1 04:55:31 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 2.104s CPU time.
Dec  1 04:55:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:31.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:32 np0005540825 python3.9[119142]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:55:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:32.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:33 np0005540825 python3.9[119295]: ansible-service_facts Invoked
Dec  1 04:55:33 np0005540825 network[119313]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 04:55:33 np0005540825 network[119314]: 'network-scripts' will be removed from distribution in near future.
Dec  1 04:55:33 np0005540825 network[119315]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 04:55:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:55:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:33.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:34.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:55:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:55:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:35.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:55:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095535 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 04:55:35 np0005540825 podman[119474]: 2025-12-01 09:55:35.865523554 +0000 UTC m=+0.069821443 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  1 04:55:35 np0005540825 podman[119474]: 2025-12-01 09:55:35.959887297 +0000 UTC m=+0.164185196 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 04:55:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:55:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:36.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:55:36 np0005540825 podman[119596]: 2025-12-01 09:55:36.515817077 +0000 UTC m=+0.064403375 container exec 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:55:36 np0005540825 podman[119596]: 2025-12-01 09:55:36.525671416 +0000 UTC m=+0.074257714 container exec_died 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:55:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:36.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:55:37 np0005540825 podman[119780]: 2025-12-01 09:55:37.135206624 +0000 UTC m=+0.102494017 container exec 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 04:55:37 np0005540825 podman[119780]: 2025-12-01 09:55:37.171755844 +0000 UTC m=+0.139043217 container exec_died 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 04:55:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:55:37 np0005540825 podman[119858]: 2025-12-01 09:55:37.387684756 +0000 UTC m=+0.056973181 container exec a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=keepalived for Ceph, release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4)
Dec  1 04:55:37 np0005540825 podman[119858]: 2025-12-01 09:55:37.403712565 +0000 UTC m=+0.073000990 container exec_died a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, io.buildah.version=1.28.2, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=keepalived, release=1793)
Dec  1 04:55:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:37.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:37 np0005540825 podman[119932]: 2025-12-01 09:55:37.661924174 +0000 UTC m=+0.062164203 container exec fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:55:37 np0005540825 podman[119932]: 2025-12-01 09:55:37.696370927 +0000 UTC m=+0.096610926 container exec_died fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:55:37 np0005540825 podman[120015]: 2025-12-01 09:55:37.921872309 +0000 UTC m=+0.051060508 container exec 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:55:38 np0005540825 podman[120015]: 2025-12-01 09:55:38.071934417 +0000 UTC m=+0.201122616 container exec_died 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 04:55:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:38.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:38 np0005540825 podman[120163]: 2025-12-01 09:55:38.536455195 +0000 UTC m=+0.083113227 container exec f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:55:38 np0005540825 podman[120163]: 2025-12-01 09:55:38.579787091 +0000 UTC m=+0.126445103 container exec_died f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 04:55:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:55:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:55:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
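[Annotation] Each handle_command/audit pair above is the mgr (mgr.compute-0.fospow) dispatching a JSON monitor command to the leader mon. The same commands can be issued from the python-rados binding; a sketch assuming a readable /etc/ceph/ceph.conf and an admin keyring on the host:

    # Sketch: issue one of the monitor commands dispatched above via python-rados.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")   # (retcode, output, status string)
    print(ret, outbuf.decode())

    cluster.shutdown()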
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:55:39
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', '.nfs', '.rgw.root', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'vms']
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
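[Annotation] The balancer pass above runs in upmap mode with a misplaced-object ceiling of 0.05 (target_max_misplaced_ratio) and prepared 0 of a maximum 10 upmap changes, i.e. the listed pools are already balanced. A sketch for inspecting the state that produced this plan, assuming the ceph CLI and an admin keyring are available:

    # Sketch: query the balancer state behind the "Optimize plan" entries above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))   # expect "upmap", True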
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
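[Annotation] The pg_autoscaler numbers above are internally consistent: each "pg target" equals capacity_ratio x bias x a PG budget of 300, which matches 3 OSDs at the default mon_target_pg_per_osd of 100 (an assumption, since neither value is printed; the cluster total of 64411926528 bytes is three ~20 GiB LVs, see the lvm list output further down). The raw target is then quantized to a power of two subject to floors such as pg_num_min and the autoscaler's step limits, which the log does not show, hence 0.0022 quantizing to 1 while 0.00061 sits at 16 against a current 32. A worked check:

    # Reproducing the raw pg targets logged by the pg_autoscaler above.
    # Assumption: PG budget = 3 OSDs * mon_target_pg_per_osd (default 100) = 300;
    # neither number appears in the log, but it matches every line exactly.
    PG_BUDGET = 300

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (capacity_ratio, bias) in pools.items():
        raw_target = capacity_ratio * bias * PG_BUDGET
        print(f"{name}: pg target {raw_target}")
    # .mgr               -> 0.0021557249951162337  (logged: same, quantized to 1)
    # cephfs.cephfs.meta -> 0.0006104707950771635  (logged: same, quantized to 16)
    # default.rgw.log    -> 0.0006486252197694863  (logged: same, quantized to 32)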
Dec  1 04:55:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:39.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:55:39 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
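[Annotation] The WRN above raises CEPHADM_FAILED_DAEMON, consistent with the nfs.cephfs daemon that systemd restarts further down. A sketch for pulling the detail behind the health code, assuming the ceph CLI and an admin keyring on the host:

    # Sketch: expand the CEPHADM_FAILED_DAEMON warning raised above.
    import json
    import subprocess

    health = json.loads(subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    for code, check in health.get("checks", {}).items():
        print(code, check["summary"]["message"])   # e.g. CEPHADM_FAILED_DAEMON ...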
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:55:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:55:39 np0005540825 podman[120593]: 2025-12-01 09:55:39.857260954 +0000 UTC m=+0.047929853 container create ff202c6385990e7e83692fba24fafaeb3cc376daa25d0b99e4e419f20d8cb44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_murdock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  1 04:55:39 np0005540825 systemd[1]: Started libpod-conmon-ff202c6385990e7e83692fba24fafaeb3cc376daa25d0b99e4e419f20d8cb44b.scope.
Dec  1 04:55:39 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:55:39 np0005540825 podman[120593]: 2025-12-01 09:55:39.837930925 +0000 UTC m=+0.028599834 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:55:39 np0005540825 podman[120593]: 2025-12-01 09:55:39.95171245 +0000 UTC m=+0.142381429 container init ff202c6385990e7e83692fba24fafaeb3cc376daa25d0b99e4e419f20d8cb44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_murdock, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:55:39 np0005540825 podman[120593]: 2025-12-01 09:55:39.962449674 +0000 UTC m=+0.153118593 container start ff202c6385990e7e83692fba24fafaeb3cc376daa25d0b99e4e419f20d8cb44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:55:39 np0005540825 podman[120593]: 2025-12-01 09:55:39.966845364 +0000 UTC m=+0.157514283 container attach ff202c6385990e7e83692fba24fafaeb3cc376daa25d0b99e4e419f20d8cb44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_murdock, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 04:55:39 np0005540825 musing_murdock[120637]: 167 167
Dec  1 04:55:39 np0005540825 systemd[1]: libpod-ff202c6385990e7e83692fba24fafaeb3cc376daa25d0b99e4e419f20d8cb44b.scope: Deactivated successfully.
Dec  1 04:55:39 np0005540825 podman[120593]: 2025-12-01 09:55:39.96851876 +0000 UTC m=+0.159187719 container died ff202c6385990e7e83692fba24fafaeb3cc376daa25d0b99e4e419f20d8cb44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_murdock, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 04:55:40 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0805805f42f1ef04375b089041c18e293226cd507b616ce690d356a5cf682283-merged.mount: Deactivated successfully.
Dec  1 04:55:40 np0005540825 podman[120593]: 2025-12-01 09:55:40.024629566 +0000 UTC m=+0.215298485 container remove ff202c6385990e7e83692fba24fafaeb3cc376daa25d0b99e4e419f20d8cb44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:55:40 np0005540825 systemd[1]: libpod-conmon-ff202c6385990e7e83692fba24fafaeb3cc376daa25d0b99e4e419f20d8cb44b.scope: Deactivated successfully.
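[Annotation] musing_murdock above (and hardcore_kepler and jovial_raman below) are cephadm's throwaway containers: create, init, start, attach, a single "167 167" line, died, remove, all within ~200 ms. The output looks like a uid/gid probe for the ceph user inside the image (167:167 in these builds); the exact command cephadm runs is not visible in this log, so the following is only an equivalent sketch:

    # Sketch of a uid/gid probe that reproduces the "167 167" one-shot output above.
    # The actual command cephadm executes is an assumption; only the output is logged.
    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())   # expected: "167 167" (ceph uid/gid in this image)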
Dec  1 04:55:40 np0005540825 podman[120681]: 2025-12-01 09:55:40.219904122 +0000 UTC m=+0.049541267 container create db1172f1657ca89c3689f044bdfdebab6a6ef9b1ffb6878680d3c0852fbea747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_blackburn, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:55:40 np0005540825 systemd[1]: Started libpod-conmon-db1172f1657ca89c3689f044bdfdebab6a6ef9b1ffb6878680d3c0852fbea747.scope.
Dec  1 04:55:40 np0005540825 podman[120681]: 2025-12-01 09:55:40.196633005 +0000 UTC m=+0.026270230 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:55:40 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:55:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daebe7ad4fcd447bf9cc35cd623af7f93a39e86c60127636636b494d7a601474/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daebe7ad4fcd447bf9cc35cd623af7f93a39e86c60127636636b494d7a601474/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daebe7ad4fcd447bf9cc35cd623af7f93a39e86c60127636636b494d7a601474/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daebe7ad4fcd447bf9cc35cd623af7f93a39e86c60127636636b494d7a601474/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daebe7ad4fcd447bf9cc35cd623af7f93a39e86c60127636636b494d7a601474/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:40 np0005540825 podman[120681]: 2025-12-01 09:55:40.315588762 +0000 UTC m=+0.145225947 container init db1172f1657ca89c3689f044bdfdebab6a6ef9b1ffb6878680d3c0852fbea747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 04:55:40 np0005540825 podman[120681]: 2025-12-01 09:55:40.33268297 +0000 UTC m=+0.162320135 container start db1172f1657ca89c3689f044bdfdebab6a6ef9b1ffb6878680d3c0852fbea747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  1 04:55:40 np0005540825 podman[120681]: 2025-12-01 09:55:40.337471831 +0000 UTC m=+0.167108996 container attach db1172f1657ca89c3689f044bdfdebab6a6ef9b1ffb6878680d3c0852fbea747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:55:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:40.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:40 np0005540825 python3.9[120758]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
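[Annotation] The python3.9 line above is the module side of an Ansible dnf task installing chrony; the invoking playbook is not part of this log. An ad-hoc equivalent, assuming ansible-core is installed on the controller:

    # Sketch: ad-hoc equivalent of the ansible.legacy.dnf invocation logged above.
    import subprocess

    subprocess.run(
        ["ansible", "localhost", "-m", "ansible.builtin.dnf",
         "-a", "name=chrony state=present"],
        check=True,
    )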
Dec  1 04:55:40 np0005540825 happy_blackburn[120725]: --> passed data devices: 0 physical, 1 LVM
Dec  1 04:55:40 np0005540825 happy_blackburn[120725]: --> All data devices are unavailable
Dec  1 04:55:40 np0005540825 systemd[1]: libpod-db1172f1657ca89c3689f044bdfdebab6a6ef9b1ffb6878680d3c0852fbea747.scope: Deactivated successfully.
Dec  1 04:55:40 np0005540825 podman[120681]: 2025-12-01 09:55:40.68335158 +0000 UTC m=+0.512988735 container died db1172f1657ca89c3689f044bdfdebab6a6ef9b1ffb6878680d3c0852fbea747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:55:40 np0005540825 ceph-mon[74416]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec  1 04:55:40 np0005540825 systemd[1]: var-lib-containers-storage-overlay-daebe7ad4fcd447bf9cc35cd623af7f93a39e86c60127636636b494d7a601474-merged.mount: Deactivated successfully.
Dec  1 04:55:40 np0005540825 podman[120681]: 2025-12-01 09:55:40.743449796 +0000 UTC m=+0.573086951 container remove db1172f1657ca89c3689f044bdfdebab6a6ef9b1ffb6878680d3c0852fbea747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_blackburn, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  1 04:55:40 np0005540825 systemd[1]: libpod-conmon-db1172f1657ca89c3689f044bdfdebab6a6ef9b1ffb6878680d3c0852fbea747.scope: Deactivated successfully.
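[Annotation] happy_blackburn's "passed data devices: 0 physical, 1 LVM" / "All data devices are unavailable" above is ceph-volume's drive-group evaluation concluding that the only candidate LV is already consumed (it carries osd.1, per the lvm list dump further down), so no new OSD is created. A sketch re-running that evaluation; the device argument mirrors the LVM layout seen in this log, and the exact spec cephadm passed is an assumption:

    # Sketch: dry-run the device evaluation behind the verdict above.
    import subprocess

    subprocess.run(
        ["cephadm", "ceph-volume", "--", "lvm", "batch",
         "--report", "--format", "json", "/dev/ceph_vg0/ceph_lv0"],
        check=True,
    )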
Dec  1 04:55:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Dec  1 04:55:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:41] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Dec  1 04:55:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:41] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Dec  1 04:55:41 np0005540825 podman[120871]: 2025-12-01 09:55:41.404282837 +0000 UTC m=+0.054533804 container create b7ddab622f99a5629f17624b6de25b5c5e6fc4060eda2eab832307609e47251c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  1 04:55:41 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Scheduled restart job, restart counter is at 1.
Dec  1 04:55:41 np0005540825 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:55:41 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 2.104s CPU time.
Dec  1 04:55:41 np0005540825 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 04:55:41 np0005540825 systemd[1]: Started libpod-conmon-b7ddab622f99a5629f17624b6de25b5c5e6fc4060eda2eab832307609e47251c.scope.
Dec  1 04:55:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:41.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:41 np0005540825 podman[120871]: 2025-12-01 09:55:41.379432157 +0000 UTC m=+0.029683164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:55:41 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:55:41 np0005540825 podman[120871]: 2025-12-01 09:55:41.508757766 +0000 UTC m=+0.159008773 container init b7ddab622f99a5629f17624b6de25b5c5e6fc4060eda2eab832307609e47251c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:55:41 np0005540825 podman[120871]: 2025-12-01 09:55:41.520004384 +0000 UTC m=+0.170255341 container start b7ddab622f99a5629f17624b6de25b5c5e6fc4060eda2eab832307609e47251c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:55:41 np0005540825 podman[120871]: 2025-12-01 09:55:41.524859147 +0000 UTC m=+0.175110154 container attach b7ddab622f99a5629f17624b6de25b5c5e6fc4060eda2eab832307609e47251c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kepler, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:55:41 np0005540825 hardcore_kepler[120889]: 167 167
Dec  1 04:55:41 np0005540825 systemd[1]: libpod-b7ddab622f99a5629f17624b6de25b5c5e6fc4060eda2eab832307609e47251c.scope: Deactivated successfully.
Dec  1 04:55:41 np0005540825 podman[120871]: 2025-12-01 09:55:41.528019234 +0000 UTC m=+0.178270191 container died b7ddab622f99a5629f17624b6de25b5c5e6fc4060eda2eab832307609e47251c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kepler, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec  1 04:55:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay-263500671c4d5b588133b504b016e895d9a3439f29e5567fc31c9570e6e1bf0a-merged.mount: Deactivated successfully.
Dec  1 04:55:41 np0005540825 podman[120871]: 2025-12-01 09:55:41.578105795 +0000 UTC m=+0.228356762 container remove b7ddab622f99a5629f17624b6de25b5c5e6fc4060eda2eab832307609e47251c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 04:55:41 np0005540825 systemd[1]: libpod-conmon-b7ddab622f99a5629f17624b6de25b5c5e6fc4060eda2eab832307609e47251c.scope: Deactivated successfully.
Dec  1 04:55:41 np0005540825 podman[120950]: 2025-12-01 09:55:41.714557341 +0000 UTC m=+0.060657662 container create 33ed98ad02f00f5f0d532f872f221422a74604fcda0145c21446c63d6c695acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:55:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cfaf6288f6c4899cc4e5b6e424dd2321ab0aeb7e2b1768c4d87ad70acba807/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cfaf6288f6c4899cc4e5b6e424dd2321ab0aeb7e2b1768c4d87ad70acba807/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cfaf6288f6c4899cc4e5b6e424dd2321ab0aeb7e2b1768c4d87ad70acba807/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cfaf6288f6c4899cc4e5b6e424dd2321ab0aeb7e2b1768c4d87ad70acba807/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.pytvsu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:41 np0005540825 podman[120966]: 2025-12-01 09:55:41.776694711 +0000 UTC m=+0.051058948 container create ffcfd178687c00e4bdce1978dd3100b3c94ef63f8f2a029747cae070020d4e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_vaughan, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 04:55:41 np0005540825 podman[120950]: 2025-12-01 09:55:41.691243702 +0000 UTC m=+0.037344043 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:55:41 np0005540825 podman[120950]: 2025-12-01 09:55:41.785700478 +0000 UTC m=+0.131800799 container init 33ed98ad02f00f5f0d532f872f221422a74604fcda0145c21446c63d6c695acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 04:55:41 np0005540825 podman[120950]: 2025-12-01 09:55:41.79198073 +0000 UTC m=+0.138081031 container start 33ed98ad02f00f5f0d532f872f221422a74604fcda0145c21446c63d6c695acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 04:55:41 np0005540825 bash[120950]: 33ed98ad02f00f5f0d532f872f221422a74604fcda0145c21446c63d6c695acc
Dec  1 04:55:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  1 04:55:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  1 04:55:41 np0005540825 systemd[1]: Started libpod-conmon-ffcfd178687c00e4bdce1978dd3100b3c94ef63f8f2a029747cae070020d4e28.scope.
Dec  1 04:55:41 np0005540825 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 04:55:41 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:55:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232bbc98adc76f0b20acfed08d54e11f4d7543c7af149cb5ff5fa246d329e4dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232bbc98adc76f0b20acfed08d54e11f4d7543c7af149cb5ff5fa246d329e4dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232bbc98adc76f0b20acfed08d54e11f4d7543c7af149cb5ff5fa246d329e4dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232bbc98adc76f0b20acfed08d54e11f4d7543c7af149cb5ff5fa246d329e4dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:41 np0005540825 podman[120966]: 2025-12-01 09:55:41.755266485 +0000 UTC m=+0.029630632 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:55:41 np0005540825 podman[120966]: 2025-12-01 09:55:41.86210125 +0000 UTC m=+0.136465407 container init ffcfd178687c00e4bdce1978dd3100b3c94ef63f8f2a029747cae070020d4e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_vaughan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  1 04:55:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  1 04:55:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  1 04:55:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  1 04:55:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  1 04:55:41 np0005540825 podman[120966]: 2025-12-01 09:55:41.871684932 +0000 UTC m=+0.146049049 container start ffcfd178687c00e4bdce1978dd3100b3c94ef63f8f2a029747cae070020d4e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 04:55:41 np0005540825 podman[120966]: 2025-12-01 09:55:41.875434265 +0000 UTC m=+0.149798402 container attach ffcfd178687c00e4bdce1978dd3100b3c94ef63f8f2a029747cae070020d4e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_vaughan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:55:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  1 04:55:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
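[Annotation] On restart the ganesha.nfsd daemon above enters a 90-second grace window ("NFS Server Now IN GRACE, duration 90"), during which clients may reclaim prior state before normal operation resumes. A sketch that watches the unit's journal for grace transitions; the unit name is taken from this log, and journalctl -f needs appropriate privileges:

    # Sketch: follow grace-period transitions for the NFS unit restarted above.
    import subprocess

    UNIT = "ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service"
    with subprocess.Popen(
        ["journalctl", "-f", "-u", UNIT], stdout=subprocess.PIPE, text=True,
    ) as proc:
        for line in proc.stdout:           # runs until interrupted (Ctrl-C)
            if "GRACE" in line:
                print(line.rstrip())       # e.g. "NFS Server Now IN GRACE, duration 90"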
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]: {
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:    "1": [
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:        {
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            "devices": [
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "/dev/loop3"
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            ],
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            "lv_name": "ceph_lv0",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            "lv_size": "21470642176",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            "name": "ceph_lv0",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            "tags": {
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.cephx_lockbox_secret": "",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.cluster_name": "ceph",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.crush_device_class": "",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.encrypted": "0",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.osd_id": "1",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.type": "block",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.vdo": "0",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:                "ceph.with_tpm": "0"
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            },
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            "type": "block",
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:            "vg_name": "ceph_vg0"
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:        }
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]:    ]
Dec  1 04:55:42 np0005540825 hopeful_vaughan[120990]: }
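[Annotation] The JSON block printed by hopeful_vaughan above is ceph-volume lvm list style output, keyed by OSD id: osd.1 lives on /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3. A sketch parsing that payload (log prefixes stripped, trimmed to the fields used) into an OSD-to-device map:

    # Sketch: parse the `ceph-volume lvm list --format json` payload shown above.
    import json

    payload = """
    {
       "1": [
           {
               "lv_path": "/dev/ceph_vg0/ceph_lv0",
               "devices": ["/dev/loop3"],
               "tags": {"ceph.osd_id": "1", "ceph.type": "block"}
           }
       ]
    }
    """
    for osd_id, lvs in json.loads(payload).items():
        for lv in lvs:
            if lv["tags"].get("ceph.type") == "block":
                print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}")
    # -> osd.1: /dev/ceph_vg0/ceph_lv0 on /dev/loop3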
Dec  1 04:55:42 np0005540825 podman[120966]: 2025-12-01 09:55:42.186695316 +0000 UTC m=+0.461059433 container died ffcfd178687c00e4bdce1978dd3100b3c94ef63f8f2a029747cae070020d4e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Dec  1 04:55:42 np0005540825 systemd[1]: libpod-ffcfd178687c00e4bdce1978dd3100b3c94ef63f8f2a029747cae070020d4e28.scope: Deactivated successfully.
Dec  1 04:55:42 np0005540825 systemd[1]: var-lib-containers-storage-overlay-232bbc98adc76f0b20acfed08d54e11f4d7543c7af149cb5ff5fa246d329e4dc-merged.mount: Deactivated successfully.
Dec  1 04:55:42 np0005540825 podman[120966]: 2025-12-01 09:55:42.227705269 +0000 UTC m=+0.502069386 container remove ffcfd178687c00e4bdce1978dd3100b3c94ef63f8f2a029747cae070020d4e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_vaughan, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 04:55:42 np0005540825 systemd[1]: libpod-conmon-ffcfd178687c00e4bdce1978dd3100b3c94ef63f8f2a029747cae070020d4e28.scope: Deactivated successfully.
Dec  1 04:55:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:55:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:42.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:55:42 np0005540825 podman[121213]: 2025-12-01 09:55:42.816780376 +0000 UTC m=+0.045192398 container create f7fec4f31c0f95f5d15f098531f06c626f38e05e3486d243526adae5686a95aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:55:42 np0005540825 systemd[1]: Started libpod-conmon-f7fec4f31c0f95f5d15f098531f06c626f38e05e3486d243526adae5686a95aa.scope.
Dec  1 04:55:42 np0005540825 podman[121213]: 2025-12-01 09:55:42.796566053 +0000 UTC m=+0.024978095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:55:42 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:55:42 np0005540825 podman[121213]: 2025-12-01 09:55:42.916848166 +0000 UTC m=+0.145260288 container init f7fec4f31c0f95f5d15f098531f06c626f38e05e3486d243526adae5686a95aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  1 04:55:42 np0005540825 podman[121213]: 2025-12-01 09:55:42.930604902 +0000 UTC m=+0.159016964 container start f7fec4f31c0f95f5d15f098531f06c626f38e05e3486d243526adae5686a95aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 04:55:42 np0005540825 jovial_raman[121229]: 167 167
Dec  1 04:55:42 np0005540825 systemd[1]: libpod-f7fec4f31c0f95f5d15f098531f06c626f38e05e3486d243526adae5686a95aa.scope: Deactivated successfully.
Dec  1 04:55:42 np0005540825 podman[121213]: 2025-12-01 09:55:42.935184278 +0000 UTC m=+0.163596390 container attach f7fec4f31c0f95f5d15f098531f06c626f38e05e3486d243526adae5686a95aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 04:55:42 np0005540825 podman[121213]: 2025-12-01 09:55:42.93637729 +0000 UTC m=+0.164789362 container died f7fec4f31c0f95f5d15f098531f06c626f38e05e3486d243526adae5686a95aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:55:42 np0005540825 systemd[1]: var-lib-containers-storage-overlay-aaa75908b2845051b70820b413a89e3eb62c52576c774856c8cd91d5c979ca89-merged.mount: Deactivated successfully.
Dec  1 04:55:42 np0005540825 podman[121213]: 2025-12-01 09:55:42.980480248 +0000 UTC m=+0.208892280 container remove f7fec4f31c0f95f5d15f098531f06c626f38e05e3486d243526adae5686a95aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_raman, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  1 04:55:43 np0005540825 systemd[1]: libpod-conmon-f7fec4f31c0f95f5d15f098531f06c626f38e05e3486d243526adae5686a95aa.scope: Deactivated successfully.
Dec  1 04:55:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095543 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 04:55:43 np0005540825 podman[121302]: 2025-12-01 09:55:43.2022687 +0000 UTC m=+0.049204668 container create 96af12003de93a61f2644e9801be2af76b34be8604e74c36f2f0f4dc310d0888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec  1 04:55:43 np0005540825 systemd[1]: Started libpod-conmon-96af12003de93a61f2644e9801be2af76b34be8604e74c36f2f0f4dc310d0888.scope.
Dec  1 04:55:43 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:55:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7881c4230ae6666446dd7ddd32607799881454df67eaab0fd75af076161f99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:43 np0005540825 podman[121302]: 2025-12-01 09:55:43.183377592 +0000 UTC m=+0.030313560 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:55:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7881c4230ae6666446dd7ddd32607799881454df67eaab0fd75af076161f99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7881c4230ae6666446dd7ddd32607799881454df67eaab0fd75af076161f99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7881c4230ae6666446dd7ddd32607799881454df67eaab0fd75af076161f99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:55:43 np0005540825 podman[121302]: 2025-12-01 09:55:43.290246348 +0000 UTC m=+0.137182346 container init 96af12003de93a61f2644e9801be2af76b34be8604e74c36f2f0f4dc310d0888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:55:43 np0005540825 podman[121302]: 2025-12-01 09:55:43.297094346 +0000 UTC m=+0.144030294 container start 96af12003de93a61f2644e9801be2af76b34be8604e74c36f2f0f4dc310d0888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:55:43 np0005540825 podman[121302]: 2025-12-01 09:55:43.300594962 +0000 UTC m=+0.147530910 container attach 96af12003de93a61f2644e9801be2af76b34be8604e74c36f2f0f4dc310d0888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_fermi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:55:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Dec  1 04:55:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:43.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:43 np0005540825 python3.9[121341]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  1 04:55:43 np0005540825 lvm[121444]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:55:43 np0005540825 lvm[121444]: VG ceph_vg0 finished
Dec  1 04:55:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:44 np0005540825 elastic_fermi[121345]: {}
Dec  1 04:55:44 np0005540825 systemd[1]: libpod-96af12003de93a61f2644e9801be2af76b34be8604e74c36f2f0f4dc310d0888.scope: Deactivated successfully.
Dec  1 04:55:44 np0005540825 podman[121302]: 2025-12-01 09:55:44.045471824 +0000 UTC m=+0.892407792 container died 96af12003de93a61f2644e9801be2af76b34be8604e74c36f2f0f4dc310d0888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_fermi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec  1 04:55:44 np0005540825 systemd[1]: libpod-96af12003de93a61f2644e9801be2af76b34be8604e74c36f2f0f4dc310d0888.scope: Consumed 1.138s CPU time.
Dec  1 04:55:44 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ac7881c4230ae6666446dd7ddd32607799881454df67eaab0fd75af076161f99-merged.mount: Deactivated successfully.
Dec  1 04:55:44 np0005540825 podman[121302]: 2025-12-01 09:55:44.09717927 +0000 UTC m=+0.944115218 container remove 96af12003de93a61f2644e9801be2af76b34be8604e74c36f2f0f4dc310d0888 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_fermi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:55:44 np0005540825 systemd[1]: libpod-conmon-96af12003de93a61f2644e9801be2af76b34be8604e74c36f2f0f4dc310d0888.scope: Deactivated successfully.
Dec  1 04:55:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:55:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:55:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:55:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:44.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:55:44 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:44 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:45 np0005540825 python3.9[121610]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:55:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Dec  1 04:55:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:45.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:45 np0005540825 python3.9[121690]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:55:46 np0005540825 python3.9[121842]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:55:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:46.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:46 np0005540825 python3.9[121920]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:55:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:46.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:55:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Dec  1 04:55:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:47.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:55:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:55:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 04:55:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:48.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:48 np0005540825 python3.9[122074]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:55:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:55:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:55:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:55:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Dec  1 04:55:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:55:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:49.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:55:50 np0005540825 python3.9[122228]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:55:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:55:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:50.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:55:51 np0005540825 python3.9[122312]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:55:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Dec  1 04:55:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:51] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Dec  1 04:55:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:55:51] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Dec  1 04:55:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:51.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:52 np0005540825 systemd[1]: session-42.scope: Deactivated successfully.
Dec  1 04:55:52 np0005540825 systemd[1]: session-42.scope: Consumed 26.831s CPU time.
Dec  1 04:55:52 np0005540825 systemd-logind[789]: Session 42 logged out. Waiting for processes to exit.
Dec  1 04:55:52 np0005540825 systemd-logind[789]: Removed session 42.
Dec  1 04:55:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:52.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Dec  1 04:55:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:53.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  1 04:55:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:55:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:55:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:54.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Dec  1 04:55:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:55.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:55:55 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:55:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b80016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:56 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:56.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:56.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:55:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:56.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:55:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:55:56.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:55:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 4 op/s
Dec  1 04:55:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:57.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:57 np0005540825 systemd-logind[789]: New session 43 of user zuul.
Dec  1 04:55:57 np0005540825 systemd[1]: Started Session 43 of User zuul.
Dec  1 04:55:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095557 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 04:55:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:58 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b80016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:55:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:55:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:55:58 np0005540825 python3.9[122542]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:55:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:58 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:55:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:58 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:55:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:55:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Dec  1 04:55:59 np0005540825 python3.9[122696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:55:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:55:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:55:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:55:59.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:55:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:55:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a00016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:55:59 np0005540825 python3.9[122774]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:00 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:00 np0005540825 systemd[1]: session-43.scope: Deactivated successfully.
Dec  1 04:56:00 np0005540825 systemd[1]: session-43.scope: Consumed 1.738s CPU time.
Dec  1 04:56:00 np0005540825 systemd-logind[789]: Session 43 logged out. Waiting for processes to exit.
Dec  1 04:56:00 np0005540825 systemd-logind[789]: Removed session 43.
Dec  1 04:56:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b80016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec  1 04:56:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:01] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 04:56:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:01] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 04:56:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:56:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:01.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:56:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 04:56:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:02 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a00016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:02.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:56:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:56:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:03.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:56:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b80016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:04 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:04.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095605 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 04:56:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a00016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:56:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:05.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:06 np0005540825 systemd-logind[789]: New session 44 of user zuul.
Dec  1 04:56:06 np0005540825 systemd[1]: Started Session 44 of User zuul.
Dec  1 04:56:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:06 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b80016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:06.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:56:06.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:56:07 np0005540825 python3.9[122958]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:56:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:56:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:56:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:07.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:56:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:08 np0005540825 python3.9[123116]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:08 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:56:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:08.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:56:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:09 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b80016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Dec  1 04:56:09 np0005540825 python3.9[123292]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:56:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:56:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:09.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:56:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:56:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:56:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:56:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:56:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:56:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:09 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:09 np0005540825 python3.9[123371]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.wdy8m7to recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:10 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:56:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:10.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:56:10 np0005540825 python3.9[123523]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:11 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Dec  1 04:56:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:11] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec  1 04:56:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:11] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec  1 04:56:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:56:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:11.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:56:11 np0005540825 python3.9[123603]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.x1807uhk recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:11 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b80016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:12 np0005540825 python3.9[123755]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:56:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:12 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:12.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:13 np0005540825 python3.9[123907]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:13 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:56:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:56:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:13.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:56:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:13 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc0096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:13 np0005540825 python3.9[123987]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:56:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:14 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b80016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:14.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:14 np0005540825 python3.9[124139]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:15 np0005540825 python3.9[124217]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:56:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:15 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:56:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:56:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:56:15 np0005540825 python3.9[124371]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:15 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:16 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:16.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:16 np0005540825 python3.9[124523]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:56:16.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
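
The alertmanager error above says the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443) could not be reached before the notification deadline, so delivery of one alert was abandoned after two attempts; only the webhook delivery failed, not the alert itself. Alertmanager writes logfmt (space-separated key=value pairs, values double-quoted when they contain spaces), which a few lines of Python can parse (a hand-rolled sketch; a real consumer might prefer a logfmt library):

    import re

    # key=value pairs; values are bare tokens or double-quoted with \" escapes
    LOGFMT = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse_logfmt(line):
        out = {}
        for key, val in LOGFMT.findall(line):
            if val.startswith('"'):
                val = val[1:-1].replace('\\"', '"')
            out[key] = val
        return out

    rec = parse_logfmt('ts=2025-12-01T09:56:16.996Z caller=dispatch.go:352 '
                       'level=error component=dispatcher msg="Notify for alerts failed"')
    print(rec["level"], rec["msg"])   # error Notify for alerts failed
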
Dec  1 04:56:17 np0005540825 python3.9[124601]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:17 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b80016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:56:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:17.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:17 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:17 np0005540825 python3.9[124780]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:18 np0005540825 python3.9[124858]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:18 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:18.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095618 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
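
This haproxy Layer4 check failure, and the probes haproxy keeps sending, plausibly explain the steady drumbeat of ganesha "svc_vc_recv ... proxy header rest len failed" EVENT lines above: ganesha appears to expect a PROXY-protocol header on new connections, and a connection that opens and closes without sending one (as a bare TCP health check does) fails the header read and gets marked dead. That is an inference from this log, not something the messages state outright; a quick tally per worker thread makes the once-per-second pattern visible (log file name assumed):

    import re
    from collections import Counter

    EVENT = re.compile(r'ganesha\.nfsd-2\[(svc_\d+)\] rpc :TIRPC :EVENT :svc_vc_recv')

    counts = Counter()
    with open("messages") as log:          # assumed name of this syslog capture
        for line in log:
            m = EVENT.search(line)
            if m:
                counts[m.group(1)] += 1

    for thread, n in counts.most_common():
        print(f"{thread}: {n} failed PROXY-header reads")
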
Dec  1 04:56:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:19 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:56:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:56:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:19.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:56:19 np0005540825 python3.9[125011]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:56:19 np0005540825 systemd[1]: Reloading.
Dec  1 04:56:19 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:56:19 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
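
The "Reloading." and the two generator messages are systemd's routine response to the ansible.builtin.systemd task just above (daemon_reload=True, enabled=True, state=started for edpm-container-shutdown); the rc.local and network-initscript notes are standard EL9 noise from the compatibility generators. By hand, the same operation would be roughly the following (equivalent systemctl commands, not the module's internals):

    import subprocess

    unit = "edpm-container-shutdown"

    # daemon_reload=True
    subprocess.run(["systemctl", "daemon-reload"], check=True)

    # enabled=True + state=started
    subprocess.run(["systemctl", "enable", "--now", unit], check=True)
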
Dec  1 04:56:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:19 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b80016c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:20 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:20.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:20 np0005540825 python3.9[125201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:21 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:21 np0005540825 python3.9[125280]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:56:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:21] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec  1 04:56:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:21] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec  1 04:56:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:56:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:21.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:56:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:21 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:22 np0005540825 python3.9[125433]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:22 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b8003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:56:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:22.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:56:22 np0005540825 python3.9[125511]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:23 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:56:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:23.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:23 np0005540825 python3.9[125664]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:56:23 np0005540825 systemd[1]: Reloading.
Dec  1 04:56:23 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:56:23 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:56:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:23 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:23 np0005540825 systemd[1]: Starting Create netns directory...
Dec  1 04:56:23 np0005540825 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 04:56:23 np0005540825 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 04:56:23 np0005540825 systemd[1]: Finished Create netns directory.
Dec  1 04:56:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:24 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:56:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
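
Roughly every 15 s the mgr (entity mgr.compute-0.fospow) asks the mon for the OSD blocklist, and the mon's audit channel records the dispatch; this is background housekeeping, not an operator command. The same query from a shell, sketched via subprocess (assumes admin access to the cluster from this host):

    import json
    import subprocess

    result = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    print(json.loads(result.stdout))   # blocklisted client addresses, if any
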
Dec  1 04:56:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:24.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:24 np0005540825 python3.9[125856]: ansible-ansible.builtin.service_facts Invoked
Dec  1 04:56:24 np0005540825 network[125873]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 04:56:24 np0005540825 network[125874]: 'network-scripts' will be removed from distribution in near future.
Dec  1 04:56:24 np0005540825 network[125875]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 04:56:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:25 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11b8003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:56:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:25.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:25 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:26 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:56:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:56:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:56:26.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:56:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:56:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:27.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:28 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:56:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:28 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:28.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:29 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:56:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:29.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:29 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:30 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:30.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11940016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 04:56:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:56:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:56:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:31] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Dec  1 04:56:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:31] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Dec  1 04:56:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:31.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:32 np0005540825 python3.9[126147]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:32 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:32.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:32 np0005540825 python3.9[126225]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:33 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11940016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 04:56:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:56:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:33.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:56:33 np0005540825 python3.9[126379]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:33 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11940016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:33 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:34 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
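
Taken together, the ganesha [reaper] lines trace one complete grace cycle: the server enters grace at 09:56:28 with a nominal 90 s duration, reloads client reclaim info from the backend at 09:56:31, finds no clients with state to reclaim (clid count(0)), and lifts grace at 09:56:34, only about 6 s in. Note the two clocks in play: ganesha stamps messages day-first in UTC (01/12/2025 09:56:xx) while the syslog prefix is host-local time (Dec 1 04:56:xx, five hours behind). A parsing sketch for the ganesha format:

    from datetime import datetime, timezone

    FMT = "%d/%m/%Y %H:%M:%S"   # ganesha: day/month/year, UTC

    start = datetime.strptime("01/12/2025 09:56:28", FMT).replace(tzinfo=timezone.utc)
    lift = datetime.strptime("01/12/2025 09:56:34", FMT).replace(tzinfo=timezone.utc)
    print((lift - start).total_seconds())   # 6.0 -- well short of the 90 s maximum
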
Dec  1 04:56:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:34 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:34.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:34 np0005540825 python3.9[126531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:35 np0005540825 python3.9[126609]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:35 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 04:56:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:35.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:35 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11940016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:36 np0005540825 python3.9[126763]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  1 04:56:36 np0005540825 systemd[1]: Starting Time & Date Service...
Dec  1 04:56:36 np0005540825 systemd[1]: Started Time & Date Service.
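
The timezone task (community.general.timezone with name=UTC) is what wakes systemd-timedated just above: on a systemd host the module goes through timedated rather than editing /etc/localtime directly (an assumption about the module's mechanism, though the manual equivalent below is a standard command):

    import subprocess

    # equivalent of community.general.timezone name=UTC on a systemd host
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
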
Dec  1 04:56:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:36 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11940016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:56:36.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:56:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:56:36.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:56:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:37 np0005540825 python3.9[126924]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:56:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:37.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c40034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:38 np0005540825 python3.9[127098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:38 np0005540825 python3.9[127176]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:38 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11940016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:38.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.012987) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582999013045, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1209, "num_deletes": 250, "total_data_size": 2255168, "memory_usage": 2311464, "flush_reason": "Manual Compaction"}
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582999024290, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1331906, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10959, "largest_seqno": 12166, "table_properties": {"data_size": 1327527, "index_size": 1903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11125, "raw_average_key_size": 20, "raw_value_size": 1318060, "raw_average_value_size": 2370, "num_data_blocks": 84, "num_entries": 556, "num_filter_entries": 556, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582879, "oldest_key_time": 1764582879, "file_creation_time": 1764582999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 11374 microseconds, and 5188 cpu microseconds.
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.024358) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1331906 bytes OK
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.024385) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.026283) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.026298) EVENT_LOG_v1 {"time_micros": 1764582999026293, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.026333) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2249887, prev total WAL file size 2249887, number of live WAL files 2.
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.026983) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1300KB)], [26(14MB)]
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582999027105, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16141935, "oldest_snapshot_seqno": -1}
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4222 keys, 13855363 bytes, temperature: kUnknown
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582999114438, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 13855363, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13823541, "index_size": 20192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 107542, "raw_average_key_size": 25, "raw_value_size": 13742676, "raw_average_value_size": 3255, "num_data_blocks": 863, "num_entries": 4222, "num_filter_entries": 4222, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764582999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.114717) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 13855363 bytes
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.116088) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.7 rd, 158.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 14.1 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(22.5) write-amplify(10.4) OK, records in: 4684, records dropped: 462 output_compression: NoCompression
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.116110) EVENT_LOG_v1 {"time_micros": 1764582999116099, "job": 10, "event": "compaction_finished", "compaction_time_micros": 87404, "compaction_time_cpu_micros": 36015, "output_level": 6, "num_output_files": 1, "total_output_size": 13855363, "num_input_records": 4684, "num_output_records": 4222, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582999116503, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764582999119784, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.026847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.119844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.119852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.119856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.119859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:56:39.119861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:56:39 np0005540825 python3.9[127328]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:39 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
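
This ganesha.nfsd svc_vc_recv EVENT repeats throughout the window (the literal "%" is how the daemon itself prints the message, so it is reproduced verbatim here). It fires when a peer opens a connection and sends no usable PROXY header, which is consistent with the haproxy layer-4 health checks logged further down. A throwaway triage sketch to tally the events per worker thread, assuming the standard /var/log/messages path:

    # Sketch: count the recurring svc_vc_recv EVENTs per ganesha worker thread.
    import collections
    import re

    pat = re.compile(r"ganesha\.nfsd-\d+\[(svc_\d+)\].*svc_vc_recv")
    counts = collections.Counter()
    with open("/var/log/messages") as log:   # log path is an assumption
        for line in log:
            m = pat.search(line)
            if m:
                counts[m.group(1)] += 1
    print(counts.most_common())
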
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:56:39
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', '.nfs', 'default.rgw.log', 'backups', 'images', 'cephfs.cephfs.data']
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
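
Each pg_autoscaler pair above is easy to reproduce: the raw target is the pool's share of capacity times its bias times a cluster-wide PG budget. The budget that fits every logged number is 300, i.e. 3 OSDs at the default mon_target_pg_per_osd of 100; that is inferred from the figures, not stated in the log. A sketch:

    # Sketch of the target arithmetic implied by the pg_autoscaler lines.
    # num_osds=3 and pg_per_osd=100 are inferred from the logged values.
    def raw_pg_target(capacity_ratio, bias, num_osds=3, pg_per_osd=100):
        return capacity_ratio * bias * num_osds * pg_per_osd

    print(raw_pg_target(7.185749983720779e-06, 1.0))  # '.mgr' -> 0.0021557..., quantized to 1
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 'cephfs.cephfs.meta' -> 0.00061047..., quantized to 16
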
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:56:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
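
The handle_command/audit pair above is the mgr querying the OSD blocklist over the monitors' JSON command interface. The same interface is reachable from any client with a keyring; a minimal python-rados sketch (conffile and admin credentials are assumptions for illustration):

    # Sketch: issue the same "osd blocklist ls" mon command over librados.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumes admin keyring
    cluster.connect()
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, json.loads(outbuf or b"[]"))
    cluster.shutdown()
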
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:56:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:56:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:39.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:56:39 np0005540825 python3.9[127408]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3ipzy1aq recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
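
The stat/file pair bracketing this window is Ansible's idempotent copy pattern: stat with get_checksum=True returns a SHA-1 of the current contents of /var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml so the play can skip rewriting an unchanged file before the file task fixes mode 0644. A sketch of the checksum half, assuming plain content hashing as the module performs it:

    # Sketch: the content checksum ansible's stat module reports
    # (checksum_algorithm=sha1).
    import hashlib

    def sha1_checksum(path, bufsize=64 * 1024):
        digest = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(bufsize), b""):
                digest.update(chunk)
        return digest.hexdigest()

    print(sha1_checksum("/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml"))
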
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:56:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:56:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:39 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:40 np0005540825 python3.9[127561]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:40 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:40.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095640 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 04:56:40 np0005540825 python3.9[127639]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 04:56:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:41] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 04:56:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:41] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 04:56:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:56:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:41.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:56:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:41 np0005540825 python3.9[127793]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
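
The task above shells out to nft -j list ruleset; the -j flag makes nft emit the whole ruleset as one JSON document under a top-level "nftables" key. A short sketch of consuming that output:

    # Sketch: run the same command and walk the JSON ruleset it returns.
    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True).stdout
    for obj in json.loads(out).get("nftables", []):
        if "chain" in obj:
            chain = obj["chain"]
            print(chain.get("table"), chain.get("name"), chain.get("hook"))
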
Dec  1 04:56:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:42 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:42.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:42 np0005540825 python3[127946]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 04:56:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:43 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 04:56:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:56:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:43.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:56:43 np0005540825 python3.9[128100]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:43 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:44 np0005540825 python3.9[128178]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:44 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:44.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:44 np0005540825 python3.9[128380]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:56:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 431 B/s wr, 2 op/s
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:56:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:45 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:56:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:56:45 np0005540825 python3.9[128514]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:45.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:45 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:45 np0005540825 podman[128659]: 2025-12-01 09:56:45.852930126 +0000 UTC m=+0.052275664 container create 48af8caa968e5ac56f17b2947b8ff41478b9bbc09b24d79ccf9e43a3ed97253f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  1 04:56:45 np0005540825 systemd[1]: Started libpod-conmon-48af8caa968e5ac56f17b2947b8ff41478b9bbc09b24d79ccf9e43a3ed97253f.scope.
Dec  1 04:56:45 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:56:45 np0005540825 podman[128659]: 2025-12-01 09:56:45.822605745 +0000 UTC m=+0.021951393 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:56:45 np0005540825 podman[128659]: 2025-12-01 09:56:45.922588843 +0000 UTC m=+0.121934371 container init 48af8caa968e5ac56f17b2947b8ff41478b9bbc09b24d79ccf9e43a3ed97253f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 04:56:45 np0005540825 podman[128659]: 2025-12-01 09:56:45.934556881 +0000 UTC m=+0.133902399 container start 48af8caa968e5ac56f17b2947b8ff41478b9bbc09b24d79ccf9e43a3ed97253f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  1 04:56:45 np0005540825 podman[128659]: 2025-12-01 09:56:45.937956394 +0000 UTC m=+0.137301912 container attach 48af8caa968e5ac56f17b2947b8ff41478b9bbc09b24d79ccf9e43a3ed97253f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 04:56:45 np0005540825 systemd[1]: libpod-48af8caa968e5ac56f17b2947b8ff41478b9bbc09b24d79ccf9e43a3ed97253f.scope: Deactivated successfully.
Dec  1 04:56:45 np0005540825 competent_williamson[128699]: 167 167
Dec  1 04:56:45 np0005540825 conmon[128699]: conmon 48af8caa968e5ac56f17 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48af8caa968e5ac56f17b2947b8ff41478b9bbc09b24d79ccf9e43a3ed97253f.scope/container/memory.events
Dec  1 04:56:45 np0005540825 podman[128659]: 2025-12-01 09:56:45.941787389 +0000 UTC m=+0.141132947 container died 48af8caa968e5ac56f17b2947b8ff41478b9bbc09b24d79ccf9e43a3ed97253f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 04:56:45 np0005540825 systemd[1]: var-lib-containers-storage-overlay-6d1e04066f07bc0e37fa09c55f5b1982b83037a542c3a3baeb9b20dc41f970f0-merged.mount: Deactivated successfully.
Dec  1 04:56:45 np0005540825 podman[128659]: 2025-12-01 09:56:45.991828121 +0000 UTC m=+0.191173679 container remove 48af8caa968e5ac56f17b2947b8ff41478b9bbc09b24d79ccf9e43a3ed97253f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_williamson, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:56:46 np0005540825 systemd[1]: libpod-conmon-48af8caa968e5ac56f17b2947b8ff41478b9bbc09b24d79ccf9e43a3ed97253f.scope: Deactivated successfully.
Dec  1 04:56:46 np0005540825 podman[128776]: 2025-12-01 09:56:46.16517806 +0000 UTC m=+0.065572357 container create 9d569859e963f694f7a35c4a184b3b8a8b46826fa70c091a37b14a3094c0145f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:56:46 np0005540825 podman[128776]: 2025-12-01 09:56:46.13303319 +0000 UTC m=+0.033427547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:56:46 np0005540825 systemd[1]: Started libpod-conmon-9d569859e963f694f7a35c4a184b3b8a8b46826fa70c091a37b14a3094c0145f.scope.
Dec  1 04:56:46 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:56:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c577738a32ac602fca912c5b1d49d00bddd1a993a442a0070b5a6740f99abbd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c577738a32ac602fca912c5b1d49d00bddd1a993a442a0070b5a6740f99abbd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c577738a32ac602fca912c5b1d49d00bddd1a993a442a0070b5a6740f99abbd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c577738a32ac602fca912c5b1d49d00bddd1a993a442a0070b5a6740f99abbd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c577738a32ac602fca912c5b1d49d00bddd1a993a442a0070b5a6740f99abbd3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:46 np0005540825 python3.9[128773]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:46 np0005540825 podman[128776]: 2025-12-01 09:56:46.29691277 +0000 UTC m=+0.197307117 container init 9d569859e963f694f7a35c4a184b3b8a8b46826fa70c091a37b14a3094c0145f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:56:46 np0005540825 podman[128776]: 2025-12-01 09:56:46.315686904 +0000 UTC m=+0.216081151 container start 9d569859e963f694f7a35c4a184b3b8a8b46826fa70c091a37b14a3094c0145f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:56:46 np0005540825 podman[128776]: 2025-12-01 09:56:46.320113536 +0000 UTC m=+0.220507783 container attach 9d569859e963f694f7a35c4a184b3b8a8b46826fa70c091a37b14a3094c0145f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  1 04:56:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:46 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:46.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:46 np0005540825 epic_noether[128794]: --> passed data devices: 0 physical, 1 LVM
Dec  1 04:56:46 np0005540825 epic_noether[128794]: --> All data devices are unavailable
Dec  1 04:56:46 np0005540825 systemd[1]: libpod-9d569859e963f694f7a35c4a184b3b8a8b46826fa70c091a37b14a3094c0145f.scope: Deactivated successfully.
Dec  1 04:56:46 np0005540825 podman[128776]: 2025-12-01 09:56:46.6782808 +0000 UTC m=+0.578675087 container died 9d569859e963f694f7a35c4a184b3b8a8b46826fa70c091a37b14a3094c0145f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:56:46 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c577738a32ac602fca912c5b1d49d00bddd1a993a442a0070b5a6740f99abbd3-merged.mount: Deactivated successfully.
Dec  1 04:56:46 np0005540825 podman[128776]: 2025-12-01 09:56:46.741893433 +0000 UTC m=+0.642287690 container remove 9d569859e963f694f7a35c4a184b3b8a8b46826fa70c091a37b14a3094c0145f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  1 04:56:46 np0005540825 systemd[1]: libpod-conmon-9d569859e963f694f7a35c4a184b3b8a8b46826fa70c091a37b14a3094c0145f.scope: Deactivated successfully.
Dec  1 04:56:46 np0005540825 python3.9[128882]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:56:47.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:56:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 431 B/s wr, 2 op/s
Dec  1 04:56:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:47 np0005540825 podman[129092]: 2025-12-01 09:56:47.48192055 +0000 UTC m=+0.057289601 container create 4d0b47f801b6e03c3d601489b99c05f0fba9c5c721a2204142ff67b9b8f31e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:56:47 np0005540825 systemd[1]: Started libpod-conmon-4d0b47f801b6e03c3d601489b99c05f0fba9c5c721a2204142ff67b9b8f31e07.scope.
Dec  1 04:56:47 np0005540825 podman[129092]: 2025-12-01 09:56:47.458437567 +0000 UTC m=+0.033806628 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:56:47 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:56:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:47.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:47 np0005540825 podman[129092]: 2025-12-01 09:56:47.588972884 +0000 UTC m=+0.164341945 container init 4d0b47f801b6e03c3d601489b99c05f0fba9c5c721a2204142ff67b9b8f31e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 04:56:47 np0005540825 podman[129092]: 2025-12-01 09:56:47.601020624 +0000 UTC m=+0.176389645 container start 4d0b47f801b6e03c3d601489b99c05f0fba9c5c721a2204142ff67b9b8f31e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 04:56:47 np0005540825 podman[129092]: 2025-12-01 09:56:47.605068565 +0000 UTC m=+0.180437626 container attach 4d0b47f801b6e03c3d601489b99c05f0fba9c5c721a2204142ff67b9b8f31e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Dec  1 04:56:47 np0005540825 cranky_sutherland[129132]: 167 167
Dec  1 04:56:47 np0005540825 systemd[1]: libpod-4d0b47f801b6e03c3d601489b99c05f0fba9c5c721a2204142ff67b9b8f31e07.scope: Deactivated successfully.
Dec  1 04:56:47 np0005540825 podman[129092]: 2025-12-01 09:56:47.608753766 +0000 UTC m=+0.184122787 container died 4d0b47f801b6e03c3d601489b99c05f0fba9c5c721a2204142ff67b9b8f31e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sutherland, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 04:56:47 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7c02dcb1309f9268737481a7f3d4af7bd71174980bf74bb4c11621fd7960df9f-merged.mount: Deactivated successfully.
Dec  1 04:56:47 np0005540825 podman[129092]: 2025-12-01 09:56:47.656589416 +0000 UTC m=+0.231958437 container remove 4d0b47f801b6e03c3d601489b99c05f0fba9c5c721a2204142ff67b9b8f31e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 04:56:47 np0005540825 systemd[1]: libpod-conmon-4d0b47f801b6e03c3d601489b99c05f0fba9c5c721a2204142ff67b9b8f31e07.scope: Deactivated successfully.
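
Each cephadm helper in this window follows the same sub-second podman lifecycle, create, init, start, attach, died, remove, because cephadm probes the host with one-shot containers. To follow that stream live rather than reconstruct it from syslog, something like the following should work (the Go-template fields are the usual podman event attributes, assumed available on this version):

    # Sketch: follow the container lifecycle events podman is logging above.
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "{{.Time}} {{.Status}} {{.Name}}"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:        # create/init/start/attach/died/remove
        print(line.rstrip())
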
Dec  1 04:56:47 np0005540825 python3.9[129167]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:47 np0005540825 podman[129185]: 2025-12-01 09:56:47.853954254 +0000 UTC m=+0.047183784 container create 963f4de142a5e3f64599c219eb4f7ce7c2c51b41fe678219a5da0f450cdfa65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:56:47 np0005540825 systemd[1]: Started libpod-conmon-963f4de142a5e3f64599c219eb4f7ce7c2c51b41fe678219a5da0f450cdfa65f.scope.
Dec  1 04:56:47 np0005540825 podman[129185]: 2025-12-01 09:56:47.831877499 +0000 UTC m=+0.025107039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:56:47 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:56:47 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fd85c9a8092664e38be13d4d85ff9d8b0be30f82ac241a7af9259618574fd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:47 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fd85c9a8092664e38be13d4d85ff9d8b0be30f82ac241a7af9259618574fd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:47 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fd85c9a8092664e38be13d4d85ff9d8b0be30f82ac241a7af9259618574fd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:47 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fd85c9a8092664e38be13d4d85ff9d8b0be30f82ac241a7af9259618574fd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:47 np0005540825 podman[129185]: 2025-12-01 09:56:47.97497873 +0000 UTC m=+0.168208330 container init 963f4de142a5e3f64599c219eb4f7ce7c2c51b41fe678219a5da0f450cdfa65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 04:56:47 np0005540825 podman[129185]: 2025-12-01 09:56:47.987471773 +0000 UTC m=+0.180701333 container start 963f4de142a5e3f64599c219eb4f7ce7c2c51b41fe678219a5da0f450cdfa65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  1 04:56:47 np0005540825 podman[129185]: 2025-12-01 09:56:47.992415748 +0000 UTC m=+0.185645318 container attach 963f4de142a5e3f64599c219eb4f7ce7c2c51b41fe678219a5da0f450cdfa65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:56:48 np0005540825 python3.9[129283]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]: {
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:    "1": [
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:        {
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            "devices": [
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "/dev/loop3"
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            ],
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            "lv_name": "ceph_lv0",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            "lv_size": "21470642176",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            "name": "ceph_lv0",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            "tags": {
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.cephx_lockbox_secret": "",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.cluster_name": "ceph",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.crush_device_class": "",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.encrypted": "0",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.osd_id": "1",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.type": "block",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.vdo": "0",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:                "ceph.with_tpm": "0"
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            },
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            "type": "block",
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:            "vg_name": "ceph_vg0"
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:        }
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]:    ]
Dec  1 04:56:48 np0005540825 elegant_lederberg[129226]: }
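
The JSON the elegant_lederberg container prints matches ceph-volume's LVM listing: one OSD (id 1) backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3, with all OSD metadata carried as LVM tags. Those tags can be read back without ceph-volume at all; a sketch using lvs's JSON reporting:

    # Sketch: pull the same ceph.* tags straight from LVM.
    import json
    import subprocess

    report = json.loads(subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
        capture_output=True, text=True, check=True).stdout)
    for lv in report["report"][0]["lv"]:
        if "ceph.osd_id" in lv["lv_tags"]:
            print(lv["vg_name"], lv["lv_name"], lv["lv_tags"])
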
Dec  1 04:56:48 np0005540825 systemd[1]: libpod-963f4de142a5e3f64599c219eb4f7ce7c2c51b41fe678219a5da0f450cdfa65f.scope: Deactivated successfully.
Dec  1 04:56:48 np0005540825 podman[129185]: 2025-12-01 09:56:48.347685673 +0000 UTC m=+0.540915193 container died 963f4de142a5e3f64599c219eb4f7ce7c2c51b41fe678219a5da0f450cdfa65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:56:48 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a8fd85c9a8092664e38be13d4d85ff9d8b0be30f82ac241a7af9259618574fd7-merged.mount: Deactivated successfully.
Dec  1 04:56:48 np0005540825 podman[129185]: 2025-12-01 09:56:48.395622536 +0000 UTC m=+0.588852056 container remove 963f4de142a5e3f64599c219eb4f7ce7c2c51b41fe678219a5da0f450cdfa65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 04:56:48 np0005540825 systemd[1]: libpod-conmon-963f4de142a5e3f64599c219eb4f7ce7c2c51b41fe678219a5da0f450cdfa65f.scope: Deactivated successfully.
Dec  1 04:56:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:48 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:48.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:48 np0005540825 ceph-mgr[74709]: [dashboard INFO request] [192.168.122.100:50610] [POST] [200] [0.003s] [4.0B] [82b51c58-492b-4171-9058-69fa99d56722] /api/prometheus_receiver
Dec  1 04:56:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:49 np0005540825 podman[129541]: 2025-12-01 09:56:49.071607819 +0000 UTC m=+0.066130683 container create 81bd8cc2ab23a93fddccff6b47912a2f8ceef723566cd829bcb79fe7293b30dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:56:49 np0005540825 systemd[1]: Started libpod-conmon-81bd8cc2ab23a93fddccff6b47912a2f8ceef723566cd829bcb79fe7293b30dc.scope.
Dec  1 04:56:49 np0005540825 podman[129541]: 2025-12-01 09:56:49.046604824 +0000 UTC m=+0.041127708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:56:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:56:49 np0005540825 python3.9[129538]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:56:49 np0005540825 podman[129541]: 2025-12-01 09:56:49.182454346 +0000 UTC m=+0.176977270 container init 81bd8cc2ab23a93fddccff6b47912a2f8ceef723566cd829bcb79fe7293b30dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  1 04:56:49 np0005540825 podman[129541]: 2025-12-01 09:56:49.194547398 +0000 UTC m=+0.189070232 container start 81bd8cc2ab23a93fddccff6b47912a2f8ceef723566cd829bcb79fe7293b30dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:56:49 np0005540825 podman[129541]: 2025-12-01 09:56:49.202061624 +0000 UTC m=+0.196584608 container attach 81bd8cc2ab23a93fddccff6b47912a2f8ceef723566cd829bcb79fe7293b30dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:56:49 np0005540825 magical_mestorf[129558]: 167 167
Dec  1 04:56:49 np0005540825 systemd[1]: libpod-81bd8cc2ab23a93fddccff6b47912a2f8ceef723566cd829bcb79fe7293b30dc.scope: Deactivated successfully.
Dec  1 04:56:49 np0005540825 podman[129541]: 2025-12-01 09:56:49.204736497 +0000 UTC m=+0.199259341 container died 81bd8cc2ab23a93fddccff6b47912a2f8ceef723566cd829bcb79fe7293b30dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:56:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 258 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:56:49 np0005540825 systemd[1]: var-lib-containers-storage-overlay-507cf2a49eb94840da4cab8e893e4e3957abdfa8319bd244ba9b59a31ae748e3-merged.mount: Deactivated successfully.
Dec  1 04:56:49 np0005540825 podman[129541]: 2025-12-01 09:56:49.25411278 +0000 UTC m=+0.248635614 container remove 81bd8cc2ab23a93fddccff6b47912a2f8ceef723566cd829bcb79fe7293b30dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_mestorf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:56:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:49 np0005540825 systemd[1]: libpod-conmon-81bd8cc2ab23a93fddccff6b47912a2f8ceef723566cd829bcb79fe7293b30dc.scope: Deactivated successfully.
Dec  1 04:56:49 np0005540825 podman[129626]: 2025-12-01 09:56:49.453702809 +0000 UTC m=+0.071197492 container create 4c641dd91157e5f1e8a1260019bba04cecd20269339c47d72130c82b8fd2dd20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:56:49 np0005540825 systemd[1]: Started libpod-conmon-4c641dd91157e5f1e8a1260019bba04cecd20269339c47d72130c82b8fd2dd20.scope.
Dec  1 04:56:49 np0005540825 podman[129626]: 2025-12-01 09:56:49.423206783 +0000 UTC m=+0.040701536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:56:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:56:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e9635ece97e8fdedf2c4cba4876ac0bd4c9c535634567b81d007274a52091/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e9635ece97e8fdedf2c4cba4876ac0bd4c9c535634567b81d007274a52091/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e9635ece97e8fdedf2c4cba4876ac0bd4c9c535634567b81d007274a52091/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e9635ece97e8fdedf2c4cba4876ac0bd4c9c535634567b81d007274a52091/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:56:49 np0005540825 podman[129626]: 2025-12-01 09:56:49.57420258 +0000 UTC m=+0.191697223 container init 4c641dd91157e5f1e8a1260019bba04cecd20269339c47d72130c82b8fd2dd20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  1 04:56:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:56:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:49.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:56:49 np0005540825 podman[129626]: 2025-12-01 09:56:49.591633857 +0000 UTC m=+0.209128540 container start 4c641dd91157e5f1e8a1260019bba04cecd20269339c47d72130c82b8fd2dd20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 04:56:49 np0005540825 podman[129626]: 2025-12-01 09:56:49.598402883 +0000 UTC m=+0.215897636 container attach 4c641dd91157e5f1e8a1260019bba04cecd20269339c47d72130c82b8fd2dd20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:56:49 np0005540825 python3.9[129678]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:50 np0005540825 lvm[129828]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:56:50 np0005540825 lvm[129828]: VG ceph_vg0 finished
Dec  1 04:56:50 np0005540825 busy_wilbur[129676]: {}
Dec  1 04:56:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:50 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:50 np0005540825 systemd[1]: libpod-4c641dd91157e5f1e8a1260019bba04cecd20269339c47d72130c82b8fd2dd20.scope: Deactivated successfully.
Dec  1 04:56:50 np0005540825 systemd[1]: libpod-4c641dd91157e5f1e8a1260019bba04cecd20269339c47d72130c82b8fd2dd20.scope: Consumed 1.552s CPU time.
Dec  1 04:56:50 np0005540825 podman[129626]: 2025-12-01 09:56:50.537836874 +0000 UTC m=+1.155331537 container died 4c641dd91157e5f1e8a1260019bba04cecd20269339c47d72130c82b8fd2dd20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:56:50 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c17e9635ece97e8fdedf2c4cba4876ac0bd4c9c535634567b81d007274a52091-merged.mount: Deactivated successfully.
Dec  1 04:56:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:50.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:50 np0005540825 podman[129626]: 2025-12-01 09:56:50.594344502 +0000 UTC m=+1.211839155 container remove 4c641dd91157e5f1e8a1260019bba04cecd20269339c47d72130c82b8fd2dd20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 04:56:50 np0005540825 systemd[1]: libpod-conmon-4c641dd91157e5f1e8a1260019bba04cecd20269339c47d72130c82b8fd2dd20.scope: Deactivated successfully.
Dec  1 04:56:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:56:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:56:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:56:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:56:50 np0005540825 python3.9[129929]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:56:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 345 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:56:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:51 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:51] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 04:56:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:56:51] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 04:56:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:51.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:56:51 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:56:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:51 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:51 np0005540825 python3.9[130099]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:51 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 04:56:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:52 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  1 04:56:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:52.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  1 04:56:52 np0005540825 python3.9[130252]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:52 np0005540825 systemd[1]: session-18.scope: Deactivated successfully.
Dec  1 04:56:52 np0005540825 systemd[1]: session-18.scope: Consumed 1min 42.720s CPU time.
Dec  1 04:56:52 np0005540825 systemd-logind[789]: Session 18 logged out. Waiting for processes to exit.
Dec  1 04:56:52 np0005540825 systemd-logind[789]: Removed session 18.
Dec  1 04:56:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 04:56:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2531 writes, 12K keys, 2531 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2531 writes, 2531 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2531 writes, 12K keys, 2531 commit groups, 1.0 writes per commit group, ingest: 23.60 MB, 0.04 MB/s#012Interval WAL: 2531 writes, 2531 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     88.6      0.23              0.06         5    0.045       0      0       0.0       0.0#012  L6      1/0   13.21 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.6    155.6    137.5      0.38              0.16         4    0.094     17K   1816       0.0       0.0#012 Sum      1/0   13.21 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6     97.3    119.2      0.60              0.22         9    0.067     17K   1816       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6     98.1    120.0      0.60              0.22         8    0.075     17K   1816       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    155.6    137.5      0.38              0.16         4    0.094     17K   1816       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     90.2      0.22              0.06         4    0.055       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.020, interval 0.020#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.6 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563970129350#2 capacity: 304.00 MB usage: 1.42 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(90,1.25 MB,0.412364%) FilterBlock(10,58.36 KB,0.0187472%) IndexBlock(10,115.86 KB,0.0372184%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  1 04:56:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 258 B/s rd, 0 op/s
Dec  1 04:56:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:53 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:53 np0005540825 python3.9[130405]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:56:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:56:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:53.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:56:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:53 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:54 np0005540825 python3.9[130558]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  1 04:56:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:56:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:56:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:54 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:54.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:55 np0005540825 python3.9[130710]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  1 04:56:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 258 B/s rd, 0 op/s
Dec  1 04:56:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:56:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:55.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:56:55 np0005540825 systemd[1]: session-44.scope: Deactivated successfully.
Dec  1 04:56:55 np0005540825 systemd[1]: session-44.scope: Consumed 35.876s CPU time.
Dec  1 04:56:55 np0005540825 systemd-logind[789]: Session 44 logged out. Waiting for processes to exit.
Dec  1 04:56:55 np0005540825 systemd-logind[789]: Removed session 44.
Dec  1 04:56:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:56 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  1 04:56:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:56.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  1 04:56:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:56:57.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:56:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 04:56:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:56:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:57.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:56:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:58 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:56:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:56:58.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:56:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:56:58.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:56:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:56:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:56:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c0036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:56:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:56:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:56:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:56:59.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:56:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:56:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:00 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:00.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:00 np0005540825 systemd-logind[789]: New session 45 of user zuul.
Dec  1 04:57:00 np0005540825 systemd[1]: Started Session 45 of User zuul.
Dec  1 04:57:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:57:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:01] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec  1 04:57:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:01] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec  1 04:57:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:01.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:01 np0005540825 python3.9[130923]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  1 04:57:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c0036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:02 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:02.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:02 np0005540825 python3.9[131075]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:57:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:57:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:03.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:03 np0005540825 python3.9[131231]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec  1 04:57:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:04 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:04.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:04 np0005540825 python3.9[131383]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.txb__jbo follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:57:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:57:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:05 np0005540825 python3.9[131509]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.txb__jbo mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583024.0644066-102-10555954884770/.source.txb__jbo _original_basename=.yta2ghdn follow=False checksum=8dc09b174cc5b8debe148224e7d00f23d70f4242 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:57:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:05.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:06 np0005540825 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  1 04:57:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:06 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:06.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:06 np0005540825 python3.9[131662]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:57:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:07.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:57:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:07.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:57:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 04:57:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:07.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:07 np0005540825 python3.9[131818]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9iOYT2GM4L6SHZTMq11oZ+BAk/eXQ8XBJJYa2Eo/9VKQiuDMNzjXWKc1heeqMgloaJAk+En3hPDTZcnt14xKW0weSVhc1GuXBU3IqdQGeO3nyjdhUNxj2O6Syt/8Srh0+ne/yimC9BxBrCHKmwPPCx0TTtiy3n953HP5w0wedM8MI2bl9X4CaVwEtwSUbhFJgRaAVvg1jWUBV+tE9CGQXy1Y7raeATTLvRa3PIqU2pSDvvN44SuFWubkATb9CNZfejG2Tz2N709KveFa1tPaAjiuj046dUN+nb5eMroLvf2T2MoSQ12AUXHcpxVB6qb918qUpn8x9/V65c4fkXQ3nNgbF3IHP7RcwSs0XISdGLMT1NPTmYDhECjFDqTwkiK+goHUXZY3N3dYfjS9uqS1/66OIDlWK6niL0DMO6j+L/iriIIzPVWmrEz384bDc+wVQgGjmVXolCOWq/vp6TE1nAFqsNTZmQXC8BHCGtitnnWgzgbJX3D4O4dBOqHqdPr8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGEIBRopLb4IdSGL1f5PVbv9932FzGHz/9YCDTQr6PvA#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDEJ0q084PIbFOMDxHa25lnKuVffDClzijZagkDx2W3Z17XxuTVNXMnebqlksv3x5cE8TQLF/PIAPJS87wX+Nuo=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+tytlc2ziEXCaePFL6NCHfQfG5hnoDOgK+/O6WujzT2GFJESz6sgXypOXA+ry9uSM1AFkZgIIj7YfrFvtxYbWsEyzbhXKiOr8noIZGkfc+43imB+C2FgUp5ZwQSFnnxyIiXQWwKIjrOXbXE1r5SClA+FIAojDoectq/AbKwehIzD1ayHdfehF7BTfXJbkf64RgNcctGyjz0LPxY2mXC0kQXEFZSqJIOn5sys9wQEkjd4XlXA66oaJPV948m4ApJniNd9ohIVmXKAO5Bo6D4WQVvrA03w7PurWjJmpQuKNNwzAn2MMUfwfF0FiH9nxKa5/yEHRA/jTlNtqA/xOFC1uvGvgfWLDMfh+AtXxrNJXtp+qeATiUthHFK9ZRT6xaqkdd+LzySkLVyUCxpvEeOSKcHCqoxNBMZ5p9skmKbus5DRvzBSzPSGfBqh+7efuwSYYRveVZ2iqukef+cMJ5t+mlGuIAZulVVeLXhivpqH20o4d+WgBLNWpPZtP1w3vnds=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKDMbjmqVhbMiFxfeq71aiHzezH5+ve9aaRv6tecZ9yt#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD2a9/UKab06QjpszdfyP/8+Fmx0ghbxasoTU/24//g4p6oYwAMEXLcqU8YkQj66SK/B/CRmkko20tQpuvcB+LQ=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNxuYL62ECxG4tKU506Q3pIBb6yt0LTfxUgzUGORrXbIq9WrYwVeb+Lkx8v046r7H1KM8BsXHHuc+/3UYA3ldToNXUkjnpV43woAUm6zBViUE4+fgkcOJmVpRTZ/uXPMGTCGECUFZ9zuo3AFkcF0ERCcieOSdVs4uPytJLM0anMY2JZ9BHHzwlK3u+R7I452i/2bTjizB5yGGjV/5usLKdzn3gANHxbNcnVh+sI8fLZDldSAoeh+Lmihzsfp+4optdWgF0GnEgV3ui8NyR+nrPN2A09+4jC0EKzW3P8PT6CaTEgt95tkEYJ0/ihBlX210GmX32GEZfnHIOSflIiIeeAz/8vomjGlRwArfsmlOxT56Q9rekK5hD2orlFCjOvrzfoJN7vvTaE/P8ls/6015TUzbkS2WqhMLJbIvNcumWshvtYifwfnwMI2BK7YTHKpx1Qc/3anJqszHfO0G7ar3+3DemlY50qxApCrKUlE/w1rQtiN1VKmlioP2XpCmwe1s=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKm9ziDthsQekJ2ppuyoRsJLe7WplMYSfdzI6Ftkcb9s#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAnzEG8a/rCCjdE5RU3Uk/1EHo5xwDY20eWwn6aeXJMS7blUnv3gyCa8WoIefjhilEbylrojzG4Tmv2ZgeeLQd4=#012 create=True mode=0644 path=/tmp/ansible.txb__jbo state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:57:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:08 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:08 np0005540825 python3.9[131970]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.txb__jbo' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:57:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:08.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:08.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:57:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:57:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:09 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:09 np0005540825 python3.9[132126]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.txb__jbo state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:57:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:57:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
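
This audit pair repeats roughly every 15 seconds: the mgr polls the mon for the OSD blocklist and the mon logs the dispatch. The equivalent query from the CLI is sketched below; it assumes the ceph client and a readable keyring are present on the host, and that the JSON form returns an array:

```python
# Run the same query the mgr issues: "osd blocklist ls" with JSON output.
# Assumes the ceph CLI and a readable client keyring on this host.
import json, subprocess

out = subprocess.run(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"],
    capture_output=True, text=True, check=True)
entries = json.loads(out.stdout or "[]")
print(f"{len(entries)} blocklist entries")
```
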
Dec  1 04:57:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:57:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:57:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:57:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:57:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:57:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:57:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:09.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:09 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:09 np0005540825 systemd[1]: session-45.scope: Deactivated successfully.
Dec  1 04:57:09 np0005540825 systemd[1]: session-45.scope: Consumed 6.313s CPU time.
Dec  1 04:57:09 np0005540825 systemd-logind[789]: Session 45 logged out. Waiting for processes to exit.
Dec  1 04:57:09 np0005540825 systemd-logind[789]: Removed session 45.
Dec  1 04:57:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:10 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:10.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:57:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:11 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:11] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 04:57:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:11] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
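
The paired mgr/cherrypy access lines are Prometheus scraping the mgr's prometheus module every 10 seconds. A scrape of the same endpoint is sketched below; port 9283 is the module's customary default and an assumption here, since the log only shows the access side:

```python
# Fetch the ceph-mgr prometheus module's /metrics and print health samples.
# Port 9283 is the module's usual default, assumed here (not in the log).
import urllib.request

with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as r:
    text = r.read().decode()

for line in text.splitlines():
    if line.startswith("ceph_health_status"):
        print(line)
```
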
Dec  1 04:57:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:11.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:11 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:12 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4001e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:12.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:57:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:13 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194000fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:13.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:13 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:14 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:14.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:57:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:15 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:15 np0005540825 systemd-logind[789]: New session 46 of user zuul.
Dec  1 04:57:15 np0005540825 systemd[1]: Started Session 46 of User zuul.
Dec  1 04:57:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:15.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:15 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194000fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:16 np0005540825 python3.9[132312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:57:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:16 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:16.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:17.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:57:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 04:57:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:17 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4001e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:17.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:17 np0005540825 python3.9[132495]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  1 04:57:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:17 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:18 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11940020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:18.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:18 np0005540825 python3.9[132651]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
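
Note the two-step sshd handling: one systemd task sets enabled=True without touching state (04:57:17), and this one sets state=started without touching enablement, i.e. `systemctl enable` followed by `systemctl start`:

```python
# The two systemd module calls above, as plain systemctl invocations.
import subprocess

subprocess.run(["systemctl", "enable", "sshd"], check=True)  # enabled=True
subprocess.run(["systemctl", "start", "sshd"], check=True)   # state=started
```
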
Dec  1 04:57:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:18.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:57:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:57:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:19 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:19.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:19 np0005540825 python3.9[132806]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
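
`nft -f` loads the EDPM chain file as a single transaction, so a parse error leaves the live ruleset untouched. A sketch that validates the file with nft's check mode before committing it:

```python
# Validate the ruleset file, then apply it; nft loads a -f file atomically.
import subprocess

rules = "/etc/nftables/edpm-chains.nft"
subprocess.run(["nft", "-c", "-f", rules], check=True)  # parse/check only
subprocess.run(["nft", "-f", rules], check=True)        # commit atomically
```
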
Dec  1 04:57:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:19 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c40030a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:20 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:20.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:20 np0005540825 python3.9[132959]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:57:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:57:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:21 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194002230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:21] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 04:57:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:21] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 04:57:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:21.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:21 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:21 np0005540825 python3.9[133113]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
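
The stat at 04:57:20 and this delete implement a change-flag handshake: an earlier task drops /etc/nftables/edpm-rules.nft.changed when the rules change, the flag's presence triggers the reload path, and removing it keeps the handler from re-firing on the next run. The consume side, with the reload step left hypothetical:

```python
# Consume-and-clear sketch for the .changed flag file seen in the stat/file tasks.
import os

flag = "/etc/nftables/edpm-rules.nft.changed"
if os.path.exists(flag):      # the stat task
    # ... reload the rules here (hypothetical handler step) ...
    os.remove(flag)           # the file state=absent task
```
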
Dec  1 04:57:22 np0005540825 systemd[1]: session-46.scope: Deactivated successfully.
Dec  1 04:57:22 np0005540825 systemd[1]: session-46.scope: Consumed 4.743s CPU time.
Dec  1 04:57:22 np0005540825 systemd-logind[789]: Session 46 logged out. Waiting for processes to exit.
Dec  1 04:57:22 np0005540825 systemd-logind[789]: Removed session 46.
Dec  1 04:57:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:22 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c40030a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:22.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:57:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:23 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:23.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:23 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:57:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:57:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:24 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:24.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:57:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095725 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
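
haproxy's Layer4 check got "Connection refused" from the nfs.cephfs.1 backend and pulled it from rotation; the matching recovery ("Layer4 check passed ... is UP") lands at 04:57:45 below, bracketing the ganesha grace episode in between. The check itself is just a TCP connect, roughly as follows; backend host and port are placeholders (standard NFS 2049), since haproxy logs only the backend name:

```python
# A bare Layer4 check like haproxy's: can we complete a TCP connect?
# Host/port are placeholders for the nfs.cephfs.1 backend address.
import socket

def l4_check(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True            # haproxy: "Layer4 check passed" -> UP
    except OSError:
        return False               # haproxy: "Connection refused" -> DOWN

print(l4_check("192.168.122.101", 2049))
```
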
Dec  1 04:57:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:25 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c40030a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:25.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:25 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:26 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194002b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:26.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:27.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:57:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:57:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:27.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c40030a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:28 np0005540825 systemd-logind[789]: New session 47 of user zuul.
Dec  1 04:57:28 np0005540825 systemd[1]: Started Session 47 of User zuul.
Dec  1 04:57:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:28 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:28.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:28.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:57:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:28.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:57:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:28.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:57:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 04:57:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:29 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:29 np0005540825 python3.9[133298]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:57:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  1 04:57:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:29.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  1 04:57:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:29 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:30 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:30.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:30 np0005540825 python3.9[133455]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:57:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:57:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:31] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec  1 04:57:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:31] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec  1 04:57:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:31.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:31 np0005540825 python3.9[133541]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 04:57:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:32 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:32.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 04:57:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:33 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c40030a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:33.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:33 np0005540825 python3.9[133696]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
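
yum-utils was installed at 04:57:31 precisely so this task could run `needs-restarting -r`, which signals through its exit status whether the host needs a full reboot; the usual convention (rc 0 = no, rc 1 = yes) is assumed below rather than shown in the log:

```python
# Interpret needs-restarting -r by exit status (yum-utils convention:
# rc 0 = no reboot required, rc 1 = reboot required; assumed here).
import subprocess

rc = subprocess.run(["needs-restarting", "-r"]).returncode
print("reboot required" if rc == 1 else "no reboot required")
```
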
Dec  1 04:57:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:33 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:33 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:57:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
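
ceph-crash cannot list /var/lib/ceph/crash (EACCES), so any crash dumps under that path would go unreported; on cephadm hosts this usually points at ownership or SELinux labeling on the crash directory, though the log alone does not say which. A quick reproduction of the failing access check:

```python
# Reproduce the ceph-crash failure mode: is the crash dir readable by us?
import os

path = "/var/lib/ceph/crash"
if not os.access(path, os.R_OK | os.X_OK):
    print(f"Permission denied scraping {path} (matches the ceph-crash error)")
```
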
Dec  1 04:57:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:34 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:34.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:35 np0005540825 python3.9[133847]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 04:57:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 04:57:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:35 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:35.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:35 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:36 np0005540825 python3.9[133999]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:57:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:36 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:36.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:36 np0005540825 python3.9[134149]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:57:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:36 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:57:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:36 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:57:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:37.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:57:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:37.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:57:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 04:57:37 np0005540825 systemd[1]: session-47.scope: Deactivated successfully.
Dec  1 04:57:37 np0005540825 systemd[1]: session-47.scope: Consumed 6.675s CPU time.
Dec  1 04:57:37 np0005540825 systemd-logind[789]: Session 47 logged out. Waiting for processes to exit.
Dec  1 04:57:37 np0005540825 systemd-logind[789]: Removed session 47.
Dec  1 04:57:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:37.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:38 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:38.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:38.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:57:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 04:57:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:39 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:57:39
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.nfs', 'default.rgw.log']
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
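
The pg_autoscaler figures above are consistent with pg_target = usage_ratio x bias x (PGs per OSD x OSD count): with 3 OSDs at the default budget of 100 PGs each, the multiplier is 300, which reproduces every logged target exactly before quantization toward a power of two. The multiplier and per-OSD budget are inferred from the numbers, not stated in the log:

```python
# Check the pg_autoscaler arithmetic against the values in the log.
# pg_target = usage_ratio * bias * 300 (3 OSDs x 100 PGs/OSD, inferred).
cases = [
    (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
    ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
    ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
]
for pool, ratio, bias, logged in cases:
    assert abs(ratio * bias * 300 - logged) < 1e-12, pool
print("all pg targets reproduced")
```
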
Dec  1 04:57:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:57:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:57:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:39.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:57:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:57:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:39 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:39 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
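
Taken together, the reaper lines trace one NFS grace cycle: grace entered for 90s at 04:57:33 (the lock-reclaim window matching the backend restart haproxy flagged), client reclaim info reloaded from the backend at 04:57:36, and, since no clients hold reclaimable state ("reclaim complete(0) clid count(0)"), grace lifted early here rather than running the full 90s. The lift condition reduces to roughly the following sketch (my reading of the log, not ganesha's literal code):

```python
# Sketch of the early-lift condition visible in the reaper lines:
# grace ends once every reclaiming client is done, or the timer expires.
def can_lift_grace(reclaim_complete: int, clid_count: int, elapsed_s: float,
                   duration_s: float = 90.0) -> bool:
    no_pending_reclaims = clid_count == reclaim_complete  # here both are 0
    return no_pending_reclaims or elapsed_s >= duration_s

print(can_lift_grace(0, 0, elapsed_s=6.0))  # True -> "NFS Server Now NOT IN GRACE"
```
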
Dec  1 04:57:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:40 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:40.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:57:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:41] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:57:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:41] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:57:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:41.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:42 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:42.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:57:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:43 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:43 np0005540825 systemd-logind[789]: New session 48 of user zuul.
Dec  1 04:57:43 np0005540825 systemd[1]: Started Session 48 of User zuul.
Dec  1 04:57:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:43.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:43 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:44 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:44 np0005540825 python3.9[134360]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:57:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:44.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:57:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095745 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 04:57:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:45 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:45.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:45 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:46 np0005540825 python3.9[134518]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:57:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:46 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:46.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:47.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:57:47 np0005540825 python3.9[134670]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:57:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:57:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:47.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:47 np0005540825 python3.9[134824]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:57:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:48 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:48.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:48 np0005540825 python3.9[134947]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583067.335311-159-185715001141436/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=833789ce570f9ad8ca1e4ced6a996586bfe06d74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:57:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:48.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:57:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 04:57:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:49 np0005540825 python3.9[135100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:57:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:49.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:49 np0005540825 python3.9[135224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583068.896876-159-69271669308013/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=cb547b0bb0278866a992ba3ec36d52c9fc332990 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:57:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:50 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:50.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:50 np0005540825 python3.9[135376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:57:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Dec  1 04:57:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:51 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:51 np0005540825 python3.9[135550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583070.1877038-159-237169035310160/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=172965010451eb96d8a62a299f5a12d31d04062e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:57:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:51] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:57:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:57:51] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:57:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 04:57:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 04:57:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:57:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:57:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:51.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:51 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:52 np0005540825 python3.9[135773]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:57:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 187 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:52 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:57:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:52 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a5f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 04:57:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:52.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 04:57:52 np0005540825 podman[136044]: 2025-12-01 09:57:52.671715076 +0000 UTC m=+0.038879935 container create cad7c281b194c757e06f5f1861cdbe7f1d937ab4853c694f3261e5b1e7208f03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_dewdney, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 04:57:52 np0005540825 python3.9[136011]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:57:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095752 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 04:57:52 np0005540825 systemd[1]: Started libpod-conmon-cad7c281b194c757e06f5f1861cdbe7f1d937ab4853c694f3261e5b1e7208f03.scope.
Dec  1 04:57:52 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:57:52 np0005540825 podman[136044]: 2025-12-01 09:57:52.655396947 +0000 UTC m=+0.022561796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:57:52 np0005540825 podman[136044]: 2025-12-01 09:57:52.771625567 +0000 UTC m=+0.138790436 container init cad7c281b194c757e06f5f1861cdbe7f1d937ab4853c694f3261e5b1e7208f03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_dewdney, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  1 04:57:52 np0005540825 podman[136044]: 2025-12-01 09:57:52.784152269 +0000 UTC m=+0.151317118 container start cad7c281b194c757e06f5f1861cdbe7f1d937ab4853c694f3261e5b1e7208f03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_dewdney, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:57:52 np0005540825 podman[136044]: 2025-12-01 09:57:52.787903365 +0000 UTC m=+0.155068304 container attach cad7c281b194c757e06f5f1861cdbe7f1d937ab4853c694f3261e5b1e7208f03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 04:57:52 np0005540825 trusting_dewdney[136061]: 167 167
Dec  1 04:57:52 np0005540825 systemd[1]: libpod-cad7c281b194c757e06f5f1861cdbe7f1d937ab4853c694f3261e5b1e7208f03.scope: Deactivated successfully.
Dec  1 04:57:52 np0005540825 podman[136044]: 2025-12-01 09:57:52.793280376 +0000 UTC m=+0.160445225 container died cad7c281b194c757e06f5f1861cdbe7f1d937ab4853c694f3261e5b1e7208f03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  1 04:57:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e364ff9936a0d1c0246842ba470e0b602d4ffe2e4252a8ddb1cef43150893a4a-merged.mount: Deactivated successfully.
Dec  1 04:57:52 np0005540825 podman[136044]: 2025-12-01 09:57:52.84174605 +0000 UTC m=+0.208910929 container remove cad7c281b194c757e06f5f1861cdbe7f1d937ab4853c694f3261e5b1e7208f03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_dewdney, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:57:52 np0005540825 systemd[1]: libpod-conmon-cad7c281b194c757e06f5f1861cdbe7f1d937ab4853c694f3261e5b1e7208f03.scope: Deactivated successfully.
Dec  1 04:57:53 np0005540825 podman[136149]: 2025-12-01 09:57:53.023296197 +0000 UTC m=+0.069002072 container create e4f5cbbb3c95ba8878416b4f39d150d38e3b295b9a663a141383328f2123423c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jang, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:57:53 np0005540825 systemd[1]: Started libpod-conmon-e4f5cbbb3c95ba8878416b4f39d150d38e3b295b9a663a141383328f2123423c.scope.
Dec  1 04:57:53 np0005540825 podman[136149]: 2025-12-01 09:57:52.99425531 +0000 UTC m=+0.039961235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:57:53 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:57:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2b4236f1b90993e0645ce083a5d7d788e7a003feca849dcef611da9c0f1dc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2b4236f1b90993e0645ce083a5d7d788e7a003feca849dcef611da9c0f1dc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2b4236f1b90993e0645ce083a5d7d788e7a003feca849dcef611da9c0f1dc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2b4236f1b90993e0645ce083a5d7d788e7a003feca849dcef611da9c0f1dc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2b4236f1b90993e0645ce083a5d7d788e7a003feca849dcef611da9c0f1dc5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:53 np0005540825 podman[136149]: 2025-12-01 09:57:53.137437309 +0000 UTC m=+0.183143174 container init e4f5cbbb3c95ba8878416b4f39d150d38e3b295b9a663a141383328f2123423c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  1 04:57:53 np0005540825 podman[136149]: 2025-12-01 09:57:53.153012087 +0000 UTC m=+0.198717932 container start e4f5cbbb3c95ba8878416b4f39d150d38e3b295b9a663a141383328f2123423c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:57:53 np0005540825 podman[136149]: 2025-12-01 09:57:53.156773893 +0000 UTC m=+0.202479778 container attach e4f5cbbb3c95ba8878416b4f39d150d38e3b295b9a663a141383328f2123423c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jang, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 04:57:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:53 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:53 np0005540825 python3.9[136258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:57:53 np0005540825 elated_jang[136202]: --> passed data devices: 0 physical, 1 LVM
Dec  1 04:57:53 np0005540825 elated_jang[136202]: --> All data devices are unavailable
Dec  1 04:57:53 np0005540825 systemd[1]: libpod-e4f5cbbb3c95ba8878416b4f39d150d38e3b295b9a663a141383328f2123423c.scope: Deactivated successfully.
Dec  1 04:57:53 np0005540825 podman[136149]: 2025-12-01 09:57:53.519504317 +0000 UTC m=+0.565210172 container died e4f5cbbb3c95ba8878416b4f39d150d38e3b295b9a663a141383328f2123423c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:57:53 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5d2b4236f1b90993e0645ce083a5d7d788e7a003feca849dcef611da9c0f1dc5-merged.mount: Deactivated successfully.
Dec  1 04:57:53 np0005540825 podman[136149]: 2025-12-01 09:57:53.563841265 +0000 UTC m=+0.609547110 container remove e4f5cbbb3c95ba8878416b4f39d150d38e3b295b9a663a141383328f2123423c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jang, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:57:53 np0005540825 systemd[1]: libpod-conmon-e4f5cbbb3c95ba8878416b4f39d150d38e3b295b9a663a141383328f2123423c.scope: Deactivated successfully.
Dec  1 04:57:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:53.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:53 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:53 np0005540825 python3.9[136452]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583072.889525-346-131043039302606/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=551200f78daf6214d07c24ae0e58aa9115b41b52 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:57:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 187 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:57:54 np0005540825 podman[136566]: 2025-12-01 09:57:54.201093492 +0000 UTC m=+0.045836501 container create 7f93c6e19f0658c016add11a288bd58213ee38eda0216ec891420a0bad83b8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 04:57:54 np0005540825 systemd[1]: Started libpod-conmon-7f93c6e19f0658c016add11a288bd58213ee38eda0216ec891420a0bad83b8ef.scope.
Dec  1 04:57:54 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:57:54 np0005540825 podman[136566]: 2025-12-01 09:57:54.18006749 +0000 UTC m=+0.024810499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:57:54 np0005540825 podman[136566]: 2025-12-01 09:57:54.286743902 +0000 UTC m=+0.131486941 container init 7f93c6e19f0658c016add11a288bd58213ee38eda0216ec891420a0bad83b8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  1 04:57:54 np0005540825 podman[136566]: 2025-12-01 09:57:54.293109441 +0000 UTC m=+0.137852440 container start 7f93c6e19f0658c016add11a288bd58213ee38eda0216ec891420a0bad83b8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:57:54 np0005540825 podman[136566]: 2025-12-01 09:57:54.297084223 +0000 UTC m=+0.141827202 container attach 7f93c6e19f0658c016add11a288bd58213ee38eda0216ec891420a0bad83b8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mcnulty, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Dec  1 04:57:54 np0005540825 elastic_mcnulty[136611]: 167 167
Dec  1 04:57:54 np0005540825 systemd[1]: libpod-7f93c6e19f0658c016add11a288bd58213ee38eda0216ec891420a0bad83b8ef.scope: Deactivated successfully.
Dec  1 04:57:54 np0005540825 podman[136566]: 2025-12-01 09:57:54.298813091 +0000 UTC m=+0.143556090 container died 7f93c6e19f0658c016add11a288bd58213ee38eda0216ec891420a0bad83b8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:57:54 np0005540825 systemd[1]: var-lib-containers-storage-overlay-02328aa62bf50a67d2d511c03fb8ed1c54864899bb86b1977b283a21cf5e06a4-merged.mount: Deactivated successfully.
Dec  1 04:57:54 np0005540825 podman[136566]: 2025-12-01 09:57:54.342194332 +0000 UTC m=+0.186937331 container remove 7f93c6e19f0658c016add11a288bd58213ee38eda0216ec891420a0bad83b8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:57:54 np0005540825 systemd[1]: libpod-conmon-7f93c6e19f0658c016add11a288bd58213ee38eda0216ec891420a0bad83b8ef.scope: Deactivated successfully.
Dec  1 04:57:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:57:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:57:54 np0005540825 podman[136690]: 2025-12-01 09:57:54.54474282 +0000 UTC m=+0.074153457 container create f00b8f734b8950b57443c639aac4c6149e209949fbd7a62906902a647f1e91fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 04:57:54 np0005540825 python3.9[136684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:57:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:54 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:54 np0005540825 podman[136690]: 2025-12-01 09:57:54.493940461 +0000 UTC m=+0.023351188 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:57:54 np0005540825 systemd[1]: Started libpod-conmon-f00b8f734b8950b57443c639aac4c6149e209949fbd7a62906902a647f1e91fd.scope.
Dec  1 04:57:54 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:57:54 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05cfc7cb4f00bc07b6f32e6c276854dad19abe0de3dce5c7cae1583fcc290fb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:54 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05cfc7cb4f00bc07b6f32e6c276854dad19abe0de3dce5c7cae1583fcc290fb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:54 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05cfc7cb4f00bc07b6f32e6c276854dad19abe0de3dce5c7cae1583fcc290fb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:54 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05cfc7cb4f00bc07b6f32e6c276854dad19abe0de3dce5c7cae1583fcc290fb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:54 np0005540825 podman[136690]: 2025-12-01 09:57:54.64426211 +0000 UTC m=+0.173672817 container init f00b8f734b8950b57443c639aac4c6149e209949fbd7a62906902a647f1e91fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_elbakyan, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 04:57:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:54.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:54 np0005540825 podman[136690]: 2025-12-01 09:57:54.651189945 +0000 UTC m=+0.180600572 container start f00b8f734b8950b57443c639aac4c6149e209949fbd7a62906902a647f1e91fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_elbakyan, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 04:57:54 np0005540825 podman[136690]: 2025-12-01 09:57:54.654732544 +0000 UTC m=+0.184143191 container attach f00b8f734b8950b57443c639aac4c6149e209949fbd7a62906902a647f1e91fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]: {
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:    "1": [
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:        {
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            "devices": [
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "/dev/loop3"
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            ],
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            "lv_name": "ceph_lv0",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            "lv_size": "21470642176",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            "name": "ceph_lv0",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            "tags": {
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.cephx_lockbox_secret": "",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.cluster_name": "ceph",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.crush_device_class": "",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.encrypted": "0",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.osd_id": "1",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.type": "block",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.vdo": "0",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:                "ceph.with_tpm": "0"
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            },
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            "type": "block",
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:            "vg_name": "ceph_vg0"
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:        }
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]:    ]
Dec  1 04:57:54 np0005540825 elastic_elbakyan[136714]: }
Dec  1 04:57:55 np0005540825 systemd[1]: libpod-f00b8f734b8950b57443c639aac4c6149e209949fbd7a62906902a647f1e91fd.scope: Deactivated successfully.
Dec  1 04:57:55 np0005540825 podman[136690]: 2025-12-01 09:57:55.015977358 +0000 UTC m=+0.545388005 container died f00b8f734b8950b57443c639aac4c6149e209949fbd7a62906902a647f1e91fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  1 04:57:55 np0005540825 systemd[1]: var-lib-containers-storage-overlay-05cfc7cb4f00bc07b6f32e6c276854dad19abe0de3dce5c7cae1583fcc290fb9-merged.mount: Deactivated successfully.
Dec  1 04:57:55 np0005540825 podman[136690]: 2025-12-01 09:57:55.080887984 +0000 UTC m=+0.610298621 container remove f00b8f734b8950b57443c639aac4c6149e209949fbd7a62906902a647f1e91fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_elbakyan, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 04:57:55 np0005540825 systemd[1]: libpod-conmon-f00b8f734b8950b57443c639aac4c6149e209949fbd7a62906902a647f1e91fd.scope: Deactivated successfully.
Dec  1 04:57:55 np0005540825 python3.9[136837]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583074.089204-346-124767745699582/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=35a392a510e9baafc6c00afe5c05a05ddead468b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:57:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:55.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:55 np0005540825 podman[137092]: 2025-12-01 09:57:55.704774346 +0000 UTC m=+0.046421167 container create 68d1b429b309ea1e7fb9d39f2ecc49daff5d49c5c216af55a4f3827f3cb28c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 04:57:55 np0005540825 systemd[1]: Started libpod-conmon-68d1b429b309ea1e7fb9d39f2ecc49daff5d49c5c216af55a4f3827f3cb28c48.scope.
Dec  1 04:57:55 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:57:55 np0005540825 podman[137092]: 2025-12-01 09:57:55.775816945 +0000 UTC m=+0.117463766 container init 68d1b429b309ea1e7fb9d39f2ecc49daff5d49c5c216af55a4f3827f3cb28c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_maxwell, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 04:57:55 np0005540825 podman[137092]: 2025-12-01 09:57:55.685427672 +0000 UTC m=+0.027074533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:57:55 np0005540825 podman[137092]: 2025-12-01 09:57:55.786443604 +0000 UTC m=+0.128090425 container start 68d1b429b309ea1e7fb9d39f2ecc49daff5d49c5c216af55a4f3827f3cb28c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_maxwell, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 04:57:55 np0005540825 podman[137092]: 2025-12-01 09:57:55.789993163 +0000 UTC m=+0.131639994 container attach 68d1b429b309ea1e7fb9d39f2ecc49daff5d49c5c216af55a4f3827f3cb28c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_maxwell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 04:57:55 np0005540825 tender_maxwell[137110]: 167 167
Dec  1 04:57:55 np0005540825 systemd[1]: libpod-68d1b429b309ea1e7fb9d39f2ecc49daff5d49c5c216af55a4f3827f3cb28c48.scope: Deactivated successfully.
Dec  1 04:57:55 np0005540825 podman[137092]: 2025-12-01 09:57:55.792115633 +0000 UTC m=+0.133762474 container died 68d1b429b309ea1e7fb9d39f2ecc49daff5d49c5c216af55a4f3827f3cb28c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  1 04:57:55 np0005540825 systemd[1]: var-lib-containers-storage-overlay-485b3b1bdd3fc694950cbfef24242478ebb7d92bfb00e2c1f18d40e12cd301ff-merged.mount: Deactivated successfully.
Dec  1 04:57:55 np0005540825 podman[137092]: 2025-12-01 09:57:55.836947534 +0000 UTC m=+0.178594345 container remove 68d1b429b309ea1e7fb9d39f2ecc49daff5d49c5c216af55a4f3827f3cb28c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 04:57:55 np0005540825 systemd[1]: libpod-conmon-68d1b429b309ea1e7fb9d39f2ecc49daff5d49c5c216af55a4f3827f3cb28c48.scope: Deactivated successfully.
Dec  1 04:57:55 np0005540825 python3.9[137095]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:57:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:56 np0005540825 podman[137159]: 2025-12-01 09:57:56.044575466 +0000 UTC m=+0.057579561 container create fd0b9749ca6dccc1ab4f35c8f734e6c668bd0b80bc5efe66bd93d221a05fdb38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ramanujan, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:57:56 np0005540825 systemd[1]: Started libpod-conmon-fd0b9749ca6dccc1ab4f35c8f734e6c668bd0b80bc5efe66bd93d221a05fdb38.scope.
Dec  1 04:57:56 np0005540825 podman[137159]: 2025-12-01 09:57:56.024565403 +0000 UTC m=+0.037569528 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:57:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 281 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:57:56 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:57:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc5ce1a0de3c5606fbbe97a7e4cac0a8f4cf3a71ae21434e7fa5bd18dd5e2dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc5ce1a0de3c5606fbbe97a7e4cac0a8f4cf3a71ae21434e7fa5bd18dd5e2dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc5ce1a0de3c5606fbbe97a7e4cac0a8f4cf3a71ae21434e7fa5bd18dd5e2dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc5ce1a0de3c5606fbbe97a7e4cac0a8f4cf3a71ae21434e7fa5bd18dd5e2dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:57:56 np0005540825 podman[137159]: 2025-12-01 09:57:56.181509808 +0000 UTC m=+0.194513943 container init fd0b9749ca6dccc1ab4f35c8f734e6c668bd0b80bc5efe66bd93d221a05fdb38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ramanujan, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 04:57:56 np0005540825 podman[137159]: 2025-12-01 09:57:56.198178547 +0000 UTC m=+0.211182672 container start fd0b9749ca6dccc1ab4f35c8f734e6c668bd0b80bc5efe66bd93d221a05fdb38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:57:56 np0005540825 podman[137159]: 2025-12-01 09:57:56.202446147 +0000 UTC m=+0.215450242 container attach fd0b9749ca6dccc1ab4f35c8f734e6c668bd0b80bc5efe66bd93d221a05fdb38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ramanujan, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:57:56 np0005540825 python3.9[137279]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583075.3646958-346-36183108797287/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b69ade7155ea28c764b681386351919d078f3581 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:57:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:56 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:56.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:56 np0005540825 lvm[137454]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:57:56 np0005540825 lvm[137454]: VG ceph_vg0 finished
Dec  1 04:57:56 np0005540825 condescending_ramanujan[137222]: {}
Dec  1 04:57:56 np0005540825 systemd[1]: libpod-fd0b9749ca6dccc1ab4f35c8f734e6c668bd0b80bc5efe66bd93d221a05fdb38.scope: Deactivated successfully.
Dec  1 04:57:56 np0005540825 systemd[1]: libpod-fd0b9749ca6dccc1ab4f35c8f734e6c668bd0b80bc5efe66bd93d221a05fdb38.scope: Consumed 1.262s CPU time.
Dec  1 04:57:56 np0005540825 podman[137159]: 2025-12-01 09:57:56.95892102 +0000 UTC m=+0.971925155 container died fd0b9749ca6dccc1ab4f35c8f734e6c668bd0b80bc5efe66bd93d221a05fdb38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ramanujan, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  1 04:57:57 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ddc5ce1a0de3c5606fbbe97a7e4cac0a8f4cf3a71ae21434e7fa5bd18dd5e2dc-merged.mount: Deactivated successfully.
Dec  1 04:57:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:57.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:57:57 np0005540825 podman[137159]: 2025-12-01 09:57:57.029573675 +0000 UTC m=+1.042577770 container remove fd0b9749ca6dccc1ab4f35c8f734e6c668bd0b80bc5efe66bd93d221a05fdb38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ramanujan, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:57:57 np0005540825 systemd[1]: libpod-conmon-fd0b9749ca6dccc1ab4f35c8f734e6c668bd0b80bc5efe66bd93d221a05fdb38.scope: Deactivated successfully.
Dec  1 04:57:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:57:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:57:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:57 np0005540825 python3.9[137512]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:57:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:57.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:57 np0005540825 python3.9[137722]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:57:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:57:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 187 B/s rd, 0 op/s
Dec  1 04:57:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:58 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:58 np0005540825 python3.9[137874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:57:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:57:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:57:58.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:57:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:57:58.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:57:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:57:59 np0005540825 python3.9[137998]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583078.0874095-530-255947952089501/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=085177ee46ea6062cb45d9b04d582dea571d2e6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:57:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:57:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:57:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:57:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:57:59.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:57:59 np0005540825 python3.9[138152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:57:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:57:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 187 B/s rd, 0 op/s
Dec  1 04:58:00 np0005540825 python3.9[138275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583079.290681-530-112240886373454/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=35a392a510e9baafc6c00afe5c05a05ddead468b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:00 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:00.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:01 np0005540825 python3.9[138427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:01] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:58:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:01] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:58:01 np0005540825 python3.9[138552]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583080.5495374-530-125163415096131/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=22c3e125354278aaa9ab7e271fcce72e01584d5a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:01.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 93 B/s wr, 0 op/s
Dec  1 04:58:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:02 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:58:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:02 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:02.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:02 np0005540825 python3.9[138704]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:58:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4004980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:03 np0005540825 python3.9[138860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:03.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:58:04 np0005540825 python3.9[138983]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583083.0737214-744-93588546285613/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c8748787c49c5bdccd5df153e138fac81f5459e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:04 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:04.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:05 np0005540825 python3.9[139135]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:58:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:58:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:58:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:05.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:05 np0005540825 python3.9[139289]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 04:58:06 np0005540825 python3.9[139412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583085.2680118-821-199677335710748/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c8748787c49c5bdccd5df153e138fac81f5459e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:06 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:58:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:06.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:58:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:07.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:58:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:07.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:07 np0005540825 python3.9[139565]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:58:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:07.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:08 np0005540825 python3.9[139718]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 04:58:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:08 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 04:58:08 np0005540825 python3.9[139841]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583087.4933686-900-278845038188051/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c8748787c49c5bdccd5df153e138fac81f5459e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:08 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:08.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:08.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:09 np0005540825 python3.9[139994]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:58:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:09 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194003860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:58:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:58:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:58:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:58:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:58:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:58:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:58:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:58:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:58:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:09.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:58:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:09 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:10 np0005540825 python3.9[140147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 04:58:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:10 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:10 np0005540825 python3.9[140270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583089.4780772-979-214168314695020/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c8748787c49c5bdccd5df153e138fac81f5459e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:58:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:10.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:58:11 np0005540825 python3.9[140423]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:58:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:11 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:11] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec  1 04:58:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:11] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec  1 04:58:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:11.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:11 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:12 np0005540825 python3.9[140576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:58:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:12 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:12 np0005540825 python3.9[140699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583091.5398798-1058-135782410126482/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c8748787c49c5bdccd5df153e138fac81f5459e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:12.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:13 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:13 np0005540825 python3.9[140852]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:58:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:13.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:13 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a8a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:13 np0005540825 python3.9[141005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 04:58:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:14 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:14 np0005540825 python3.9[141128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583093.511433-1103-68530230222111/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c8748787c49c5bdccd5df153e138fac81f5459e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:14.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095814 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 04:58:15 np0005540825 systemd[1]: session-48.scope: Deactivated successfully.
Dec  1 04:58:15 np0005540825 systemd[1]: session-48.scope: Consumed 25.901s CPU time.
Dec  1 04:58:15 np0005540825 systemd-logind[789]: Session 48 logged out. Waiting for processes to exit.
Dec  1 04:58:15 np0005540825 systemd-logind[789]: Removed session 48.
Dec  1 04:58:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:15 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:15.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:15 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 04:58:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:16 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a8c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:16.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:17.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:17 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:17.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:17 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:58:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:18 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:18.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.768371) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583098768419, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1134, "num_deletes": 251, "total_data_size": 2115633, "memory_usage": 2150200, "flush_reason": "Manual Compaction"}
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583098785921, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 2042220, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12168, "largest_seqno": 13300, "table_properties": {"data_size": 2036843, "index_size": 2837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11298, "raw_average_key_size": 19, "raw_value_size": 2026011, "raw_average_value_size": 3457, "num_data_blocks": 126, "num_entries": 586, "num_filter_entries": 586, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764583000, "oldest_key_time": 1764583000, "file_creation_time": 1764583098, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 17619 microseconds, and 8513 cpu microseconds.
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.785986) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 2042220 bytes OK
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.786015) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.787558) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.787583) EVENT_LOG_v1 {"time_micros": 1764583098787576, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.787604) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2110564, prev total WAL file size 2110564, number of live WAL files 2.
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.788745) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1994KB)], [29(13MB)]
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583098788799, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15897583, "oldest_snapshot_seqno": -1}
Dec  1 04:58:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:18.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4290 keys, 13793854 bytes, temperature: kUnknown
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583098881124, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 13793854, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13762301, "index_size": 19731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 109722, "raw_average_key_size": 25, "raw_value_size": 13680974, "raw_average_value_size": 3189, "num_data_blocks": 832, "num_entries": 4290, "num_filter_entries": 4290, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764583098, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.881607) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13793854 bytes
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.883617) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.8 rd, 149.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 13.2 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(14.5) write-amplify(6.8) OK, records in: 4808, records dropped: 518 output_compression: NoCompression
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.883651) EVENT_LOG_v1 {"time_micros": 1764583098883636, "job": 12, "event": "compaction_finished", "compaction_time_micros": 92545, "compaction_time_cpu_micros": 48224, "output_level": 6, "num_output_files": 1, "total_output_size": 13793854, "num_input_records": 4808, "num_output_records": 4290, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583098884856, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583098890015, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.788645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.890196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.890203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.890206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.890209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:58:18 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-09:58:18.890211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 04:58:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:19 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a8e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:19.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:19 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:58:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:20 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:20.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:21 np0005540825 systemd-logind[789]: New session 49 of user zuul.
Dec  1 04:58:21 np0005540825 systemd[1]: Started Session 49 of User zuul.
Dec  1 04:58:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:21 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:21] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec  1 04:58:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:21] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec  1 04:58:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:21 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:22 np0005540825 python3.9[141341]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:58:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:22 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:22.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:22 np0005540825 python3.9[141493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:23 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:23 np0005540825 python3.9[141618]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583102.2420204-62-116426221949318/.source.conf _original_basename=ceph.conf follow=False checksum=0a8180f0f80a13ef358ded9b1ade2f059a9b256f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:23.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:23 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:58:24 np0005540825 python3.9[141770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:58:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:58:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:24 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:24.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:24 np0005540825 python3.9[141893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583103.7950451-62-222901126824968/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=5a16a5bd4a7ebcbad903a4d80924389de6535d80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:25 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:25 np0005540825 systemd[1]: session-49.scope: Deactivated successfully.
Dec  1 04:58:25 np0005540825 systemd[1]: session-49.scope: Consumed 3.066s CPU time.
Dec  1 04:58:25 np0005540825 systemd-logind[789]: Session 49 logged out. Waiting for processes to exit.
Dec  1 04:58:25 np0005540825 systemd-logind[789]: Removed session 49.
Dec  1 04:58:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:25.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:25 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:58:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:26 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:26.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:27.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 04:58:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 8395 writes, 34K keys, 8395 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 8395 writes, 1674 syncs, 5.01 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8395 writes, 34K keys, 8395 commit groups, 1.0 writes per commit group, ingest: 21.35 MB, 0.04 MB/s#012Interval WAL: 8395 writes, 1674 syncs, 5.01 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  1 04:58:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:27.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:58:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:28 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:28.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:28.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:29 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:29.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:29 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:58:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:30 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:58:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:30.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:58:30 np0005540825 systemd-logind[789]: New session 50 of user zuul.
Dec  1 04:58:30 np0005540825 systemd[1]: Started Session 50 of User zuul.
Dec  1 04:58:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:31] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:58:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:31] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:58:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:31.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:32 np0005540825 python3.9[142079]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:58:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 04:58:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:32 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:32.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:33 np0005540825 python3.9[142236]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:58:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:33 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00a9a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:58:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:33.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:58:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:33 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:34 np0005540825 python3.9[142389]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:58:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:58:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:34 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:34.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:34 np0005540825 python3.9[142540]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:58:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:35 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4002330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:35.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:35 np0005540825 python3.9[142695]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  1 04:58:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:35 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4002330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:58:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:36 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:58:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:36.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:58:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:37.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:37 np0005540825 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  1 04:58:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:37.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:38 np0005540825 python3.9[142878]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 04:58:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:58:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:38 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4002330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:38.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:38.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:38 np0005540825 python3.9[142962]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 04:58:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:39 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:58:39
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['.nfs', 'images', '.mgr', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 04:58:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:58:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:58:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:58:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:39.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:39 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:58:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:40 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:40.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:41] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:58:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:41] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:58:41 np0005540825 python3.9[143117]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 04:58:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:41.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 04:58:42 np0005540825 python3[143274]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  1 04:58:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:42 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:42.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:43 np0005540825 python3.9[143426]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:43 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:43.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:43 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:58:44 np0005540825 python3.9[143580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:44 np0005540825 python3.9[143658]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:44 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:44.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:45 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:45 np0005540825 python3.9[143811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:45.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:45 np0005540825 python3.9[143890]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.w09o6goj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:45 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:58:46 np0005540825 python3.9[144042]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:46 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:46.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:47.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:47 np0005540825 python3.9[144120]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:47.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:47 np0005540825 python3.9[144274]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:58:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:58:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:48 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:58:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:48.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:58:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:48.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:48 np0005540825 python3[144427]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 04:58:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:49.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:49 np0005540825 python3.9[144581]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:58:50 np0005540825 python3.9[144706]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583129.211309-431-261154634317834/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:50 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:50.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:51] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:58:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:58:51] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:58:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:51 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:51 np0005540825 python3.9[144859]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:51.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:51 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:52 np0005540825 python3.9[144985]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583130.786653-476-172711473938279/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 04:58:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:52 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:58:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:52.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:58:52 np0005540825 python3.9[145137]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:53 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:53 np0005540825 python3.9[145263]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583132.2312272-521-48843173971456/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:53.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:53 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:54 np0005540825 python3.9[145416]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:58:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:58:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:58:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:54 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:58:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:54.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:58:54 np0005540825 python3.9[145541]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583133.6235406-566-46452210170439/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:55 np0005540825 python3.9[145695]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:58:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:58:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:55.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:58:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:58:56 np0005540825 python3.9[145820]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583135.1594467-611-71110088945682/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:56 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:56.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:57.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:57 np0005540825 python3.9[145972]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:57.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 04:58:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:58:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 04:58:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:58 np0005540825 python3.9[146215]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:58:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:58:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:58 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:58:58.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:58:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 288 B/s rd, 0 op/s
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 04:58:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:58:58.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 04:58:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 04:58:58 np0005540825 python3.9[146387]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:58:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:58:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:59 np0005540825 podman[146592]: 2025-12-01 09:58:59.527514353 +0000 UTC m=+0.039662840 container create 2bd8ebd7249bff9ed09751cc70f05192ebc0b4139fd105218845fec4ccc0e994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_babbage, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:58:59 np0005540825 systemd[1]: Started libpod-conmon-2bd8ebd7249bff9ed09751cc70f05192ebc0b4139fd105218845fec4ccc0e994.scope.
Dec  1 04:58:59 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:58:59 np0005540825 podman[146592]: 2025-12-01 09:58:59.51241372 +0000 UTC m=+0.024562217 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:58:59 np0005540825 podman[146592]: 2025-12-01 09:58:59.630547582 +0000 UTC m=+0.142696119 container init 2bd8ebd7249bff9ed09751cc70f05192ebc0b4139fd105218845fec4ccc0e994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:58:59 np0005540825 podman[146592]: 2025-12-01 09:58:59.639545413 +0000 UTC m=+0.151693920 container start 2bd8ebd7249bff9ed09751cc70f05192ebc0b4139fd105218845fec4ccc0e994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  1 04:58:59 np0005540825 gifted_babbage[146649]: 167 167
Dec  1 04:58:59 np0005540825 podman[146592]: 2025-12-01 09:58:59.643071757 +0000 UTC m=+0.155220304 container attach 2bd8ebd7249bff9ed09751cc70f05192ebc0b4139fd105218845fec4ccc0e994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_babbage, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 04:58:59 np0005540825 systemd[1]: libpod-2bd8ebd7249bff9ed09751cc70f05192ebc0b4139fd105218845fec4ccc0e994.scope: Deactivated successfully.
Dec  1 04:58:59 np0005540825 podman[146592]: 2025-12-01 09:58:59.647568216 +0000 UTC m=+0.159716743 container died 2bd8ebd7249bff9ed09751cc70f05192ebc0b4139fd105218845fec4ccc0e994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_babbage, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  1 04:58:59 np0005540825 systemd[1]: var-lib-containers-storage-overlay-cac9b48cbff13a618d07c05cccd890229ab52fd9b3e40b82675b8ee8211369c5-merged.mount: Deactivated successfully.
Dec  1 04:58:59 np0005540825 podman[146592]: 2025-12-01 09:58:59.716517296 +0000 UTC m=+0.228665773 container remove 2bd8ebd7249bff9ed09751cc70f05192ebc0b4139fd105218845fec4ccc0e994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  1 04:58:59 np0005540825 systemd[1]: libpod-conmon-2bd8ebd7249bff9ed09751cc70f05192ebc0b4139fd105218845fec4ccc0e994.scope: Deactivated successfully.
Dec  1 04:58:59 np0005540825 python3.9[146651]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:58:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:58:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:58:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:58:59.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:58:59 np0005540825 podman[146674]: 2025-12-01 09:58:59.902880139 +0000 UTC m=+0.055745318 container create 2b6b750b243407894bc7296f886d15e542346f91df493156cc71b466d57367ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec  1 04:58:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:58:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:58:59 np0005540825 systemd[1]: Started libpod-conmon-2b6b750b243407894bc7296f886d15e542346f91df493156cc71b466d57367ea.scope.
Dec  1 04:58:59 np0005540825 podman[146674]: 2025-12-01 09:58:59.877136282 +0000 UTC m=+0.030001471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:58:59 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:58:59 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 04:58:59 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:58:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9208f1ffc5eb020426675ced40f1b52d7db3bf56b4f2a79984e30fe6c2fd963d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:58:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9208f1ffc5eb020426675ced40f1b52d7db3bf56b4f2a79984e30fe6c2fd963d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:58:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9208f1ffc5eb020426675ced40f1b52d7db3bf56b4f2a79984e30fe6c2fd963d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:58:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9208f1ffc5eb020426675ced40f1b52d7db3bf56b4f2a79984e30fe6c2fd963d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:58:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9208f1ffc5eb020426675ced40f1b52d7db3bf56b4f2a79984e30fe6c2fd963d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 04:59:00 np0005540825 podman[146674]: 2025-12-01 09:59:00.019715077 +0000 UTC m=+0.172580246 container init 2b6b750b243407894bc7296f886d15e542346f91df493156cc71b466d57367ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 04:59:00 np0005540825 podman[146674]: 2025-12-01 09:59:00.026001245 +0000 UTC m=+0.178866394 container start 2b6b750b243407894bc7296f886d15e542346f91df493156cc71b466d57367ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 04:59:00 np0005540825 podman[146674]: 2025-12-01 09:59:00.030921526 +0000 UTC m=+0.183786745 container attach 2b6b750b243407894bc7296f886d15e542346f91df493156cc71b466d57367ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  1 04:59:00 np0005540825 interesting_varahamihira[146727]: --> passed data devices: 0 physical, 1 LVM
Dec  1 04:59:00 np0005540825 interesting_varahamihira[146727]: --> All data devices are unavailable
Dec  1 04:59:00 np0005540825 systemd[1]: libpod-2b6b750b243407894bc7296f886d15e542346f91df493156cc71b466d57367ea.scope: Deactivated successfully.
Dec  1 04:59:00 np0005540825 podman[146674]: 2025-12-01 09:59:00.407029482 +0000 UTC m=+0.559894621 container died 2b6b750b243407894bc7296f886d15e542346f91df493156cc71b466d57367ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:59:00 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9208f1ffc5eb020426675ced40f1b52d7db3bf56b4f2a79984e30fe6c2fd963d-merged.mount: Deactivated successfully.
Dec  1 04:59:00 np0005540825 podman[146674]: 2025-12-01 09:59:00.455191627 +0000 UTC m=+0.608056756 container remove 2b6b750b243407894bc7296f886d15e542346f91df493156cc71b466d57367ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_varahamihira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:59:00 np0005540825 systemd[1]: libpod-conmon-2b6b750b243407894bc7296f886d15e542346f91df493156cc71b466d57367ea.scope: Deactivated successfully.
Dec  1 04:59:00 np0005540825 python3.9[146852]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:59:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:00 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:00.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 480 B/s rd, 0 op/s
Dec  1 04:59:01 np0005540825 podman[147110]: 2025-12-01 09:59:01.093926662 +0000 UTC m=+0.045632041 container create 8b07045fdff1ce3ba63592b340e6960c3306e031664b345eb61e4d448cb9660c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 04:59:01 np0005540825 systemd[1]: Started libpod-conmon-8b07045fdff1ce3ba63592b340e6960c3306e031664b345eb61e4d448cb9660c.scope.
Dec  1 04:59:01 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:59:01 np0005540825 podman[147110]: 2025-12-01 09:59:01.073757763 +0000 UTC m=+0.025463172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:59:01 np0005540825 podman[147110]: 2025-12-01 09:59:01.184543426 +0000 UTC m=+0.136248825 container init 8b07045fdff1ce3ba63592b340e6960c3306e031664b345eb61e4d448cb9660c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wilson, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Dec  1 04:59:01 np0005540825 podman[147110]: 2025-12-01 09:59:01.191325328 +0000 UTC m=+0.143030727 container start 8b07045fdff1ce3ba63592b340e6960c3306e031664b345eb61e4d448cb9660c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wilson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  1 04:59:01 np0005540825 podman[147110]: 2025-12-01 09:59:01.195280803 +0000 UTC m=+0.146986262 container attach 8b07045fdff1ce3ba63592b340e6960c3306e031664b345eb61e4d448cb9660c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wilson, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Dec  1 04:59:01 np0005540825 agitated_wilson[147129]: 167 167
Dec  1 04:59:01 np0005540825 systemd[1]: libpod-8b07045fdff1ce3ba63592b340e6960c3306e031664b345eb61e4d448cb9660c.scope: Deactivated successfully.
Dec  1 04:59:01 np0005540825 conmon[147129]: conmon 8b07045fdff1ce3ba635 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b07045fdff1ce3ba63592b340e6960c3306e031664b345eb61e4d448cb9660c.scope/container/memory.events
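
conmon's warning here is benign: it tried to read the cgroup v2 memory.events file for a container whose scope had already been deactivated (the line above), so the path no longer existed. For a still-running scope the file is plain "key value" text; a minimal sketch of reading it, assuming a live scope path like the one in the log:

    def read_memory_events(path: str) -> dict:
        """Parse a cgroup v2 memory.events file: one 'key value' pair per line."""
        events = {}
        with open(path) as fh:
            for line in fh:
                key, value = line.split()
                events[key] = int(value)
        # Typical keys: low, high, max, oom, oom_kill
        return events
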
Dec  1 04:59:01 np0005540825 podman[147110]: 2025-12-01 09:59:01.200583305 +0000 UTC m=+0.152288704 container died 8b07045fdff1ce3ba63592b340e6960c3306e031664b345eb61e4d448cb9660c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  1 04:59:01 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0d1368ffa3faf47e18fbe440b5e1c6c0e70d778b784f8950fd39e087b57c7b3b-merged.mount: Deactivated successfully.
Dec  1 04:59:01 np0005540825 podman[147110]: 2025-12-01 09:59:01.253782368 +0000 UTC m=+0.205487777 container remove 8b07045fdff1ce3ba63592b340e6960c3306e031664b345eb61e4d448cb9660c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wilson, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  1 04:59:01 np0005540825 python3.9[147118]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
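
The Ansible task above is, in effect, `set -o pipefail; cat <three nft files> | nft -f -`: the three EDPM rule fragments are concatenated and handed to nft as a single ruleset, so they apply in one transaction or not at all. A minimal Python sketch of the same pipeline (file paths taken from the log line; root privileges and an installed nft binary are assumed):

    import subprocess

    RULE_FILES = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    def apply_edpm_rules() -> None:
        # Concatenate the fragments and feed them to `nft -f -` in one shot.
        payload = "".join(open(path).read() for path in RULE_FILES)
        # check=True plays the role of `set -o pipefail`: a non-zero exit raises.
        subprocess.run(["nft", "-f", "-"], input=payload, text=True, check=True)
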
Dec  1 04:59:01 np0005540825 systemd[1]: libpod-conmon-8b07045fdff1ce3ba63592b340e6960c3306e031664b345eb61e4d448cb9660c.scope: Deactivated successfully.
Dec  1 04:59:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:01] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec  1 04:59:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:01] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Dec  1 04:59:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
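
The recurring TIRPC events from the nfs-cephfs container are ganesha's PROXY-protocol-aware listener rejecting connections that carry no PROXY preamble; the most plausible source on this host is a plain TCP health probe (see the HAProxy layer4 check further down). For contrast, a well-formed PROXY protocol v1 preamble is a single CRLF-terminated text line sent before any RPC bytes; the addresses and ports below are illustrative only:

    # PROXY protocol v1 preamble (hypothetical addresses/ports):
    #   PROXY <proto> <src-ip> <dst-ip> <src-port> <dst-port>\r\n
    header = b"PROXY TCP4 192.168.122.100 172.19.0.100 51234 2049\r\n"
    # sock.sendall(header)  # ...then the normal NFS/RPC traffic follows.
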
Dec  1 04:59:01 np0005540825 podman[147178]: 2025-12-01 09:59:01.45048729 +0000 UTC m=+0.071829982 container create cd9dfe5feb10ab0f325b375e7352a31156e36feecf082ece5ad1aaef5c56400a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_murdock, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 04:59:01 np0005540825 systemd[1]: Started libpod-conmon-cd9dfe5feb10ab0f325b375e7352a31156e36feecf082ece5ad1aaef5c56400a.scope.
Dec  1 04:59:01 np0005540825 podman[147178]: 2025-12-01 09:59:01.417090527 +0000 UTC m=+0.038433289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:59:01 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:59:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025aea6b3df0cc8a4296abced0a34bbef16955b8ceda5310b74a329ca8e47100/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:59:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025aea6b3df0cc8a4296abced0a34bbef16955b8ceda5310b74a329ca8e47100/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:59:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025aea6b3df0cc8a4296abced0a34bbef16955b8ceda5310b74a329ca8e47100/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:59:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025aea6b3df0cc8a4296abced0a34bbef16955b8ceda5310b74a329ca8e47100/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
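
These xfs notices record that the overlay mounts use 32-bit inode timestamps, which run out at 0x7fffffff seconds after the Unix epoch. Where that limit lands, checked with the standard library:

    >>> from datetime import datetime, timezone
    >>> datetime.fromtimestamp(0x7fffffff, tz=timezone.utc)
    datetime.datetime(2038, 1, 19, 3, 14, 7, tzinfo=datetime.timezone.utc)
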
Dec  1 04:59:01 np0005540825 podman[147178]: 2025-12-01 09:59:01.565676311 +0000 UTC m=+0.187019023 container init cd9dfe5feb10ab0f325b375e7352a31156e36feecf082ece5ad1aaef5c56400a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:59:01 np0005540825 podman[147178]: 2025-12-01 09:59:01.582608954 +0000 UTC m=+0.203951676 container start cd9dfe5feb10ab0f325b375e7352a31156e36feecf082ece5ad1aaef5c56400a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec  1 04:59:01 np0005540825 podman[147178]: 2025-12-01 09:59:01.587008462 +0000 UTC m=+0.208351264 container attach cd9dfe5feb10ab0f325b375e7352a31156e36feecf082ece5ad1aaef5c56400a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 04:59:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:01.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]: {
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:    "1": [
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:        {
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            "devices": [
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "/dev/loop3"
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            ],
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            "lv_name": "ceph_lv0",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            "lv_size": "21470642176",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            "name": "ceph_lv0",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            "tags": {
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.cephx_lockbox_secret": "",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.cluster_name": "ceph",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.crush_device_class": "",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.encrypted": "0",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.osd_id": "1",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.type": "block",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.vdo": "0",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:                "ceph.with_tpm": "0"
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            },
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            "type": "block",
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:            "vg_name": "ceph_vg0"
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:        }
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]:    ]
Dec  1 04:59:01 np0005540825 elegant_murdock[147235]: }
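
The JSON printed by elegant_murdock has the shape of `ceph-volume lvm list --format json` output (one top-level key per OSD id, each entry describing a backing LV and its ceph.* tags); that provenance is inferred from the fields, not stated in the log. A small sketch that extracts the essentials from output shaped like this:

    import json

    def summarize_lvm_list(raw: str):
        """Yield (osd_id, osd_fsid, devices) from JSON shaped like the block above."""
        for osd_id, entries in json.loads(raw).items():
            for entry in entries:
                tags = entry.get("tags", {})
                yield osd_id, tags.get("ceph.osd_fsid", ""), entry.get("devices", [])

    # Fed the logged output, this yields:
    #   ('1', '0faa9895-0b70-4c34-8548-ef8fc62fc047', ['/dev/loop3'])
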
Dec  1 04:59:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:01 np0005540825 systemd[1]: libpod-cd9dfe5feb10ab0f325b375e7352a31156e36feecf082ece5ad1aaef5c56400a.scope: Deactivated successfully.
Dec  1 04:59:01 np0005540825 podman[147178]: 2025-12-01 09:59:01.929967315 +0000 UTC m=+0.551310027 container died cd9dfe5feb10ab0f325b375e7352a31156e36feecf082ece5ad1aaef5c56400a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 04:59:01 np0005540825 systemd[1]: var-lib-containers-storage-overlay-025aea6b3df0cc8a4296abced0a34bbef16955b8ceda5310b74a329ca8e47100-merged.mount: Deactivated successfully.
Dec  1 04:59:01 np0005540825 podman[147178]: 2025-12-01 09:59:01.997624555 +0000 UTC m=+0.618967247 container remove cd9dfe5feb10ab0f325b375e7352a31156e36feecf082ece5ad1aaef5c56400a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:59:02 np0005540825 systemd[1]: libpod-conmon-cd9dfe5feb10ab0f325b375e7352a31156e36feecf082ece5ad1aaef5c56400a.scope: Deactivated successfully.
Dec  1 04:59:02 np0005540825 python3.9[147333]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:59:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:02 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:02 np0005540825 podman[147467]: 2025-12-01 09:59:02.683899293 +0000 UTC m=+0.050076941 container create a060a25c1ececff2ff3cd77abb76c28b7462657d1835ecd953ed39b7f7feb554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 04:59:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:02.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:02 np0005540825 systemd[1]: Started libpod-conmon-a060a25c1ececff2ff3cd77abb76c28b7462657d1835ecd953ed39b7f7feb554.scope.
Dec  1 04:59:02 np0005540825 podman[147467]: 2025-12-01 09:59:02.665570963 +0000 UTC m=+0.031748611 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:59:02 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:59:02 np0005540825 podman[147467]: 2025-12-01 09:59:02.78583972 +0000 UTC m=+0.152017398 container init a060a25c1ececff2ff3cd77abb76c28b7462657d1835ecd953ed39b7f7feb554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 04:59:02 np0005540825 podman[147467]: 2025-12-01 09:59:02.794153742 +0000 UTC m=+0.160331360 container start a060a25c1ececff2ff3cd77abb76c28b7462657d1835ecd953ed39b7f7feb554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curran, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 04:59:02 np0005540825 podman[147467]: 2025-12-01 09:59:02.797988955 +0000 UTC m=+0.164166603 container attach a060a25c1ececff2ff3cd77abb76c28b7462657d1835ecd953ed39b7f7feb554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curran, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:59:02 np0005540825 recursing_curran[147531]: 167 167
Dec  1 04:59:02 np0005540825 systemd[1]: libpod-a060a25c1ececff2ff3cd77abb76c28b7462657d1835ecd953ed39b7f7feb554.scope: Deactivated successfully.
Dec  1 04:59:02 np0005540825 podman[147467]: 2025-12-01 09:59:02.801654803 +0000 UTC m=+0.167832421 container died a060a25c1ececff2ff3cd77abb76c28b7462657d1835ecd953ed39b7f7feb554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 04:59:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 288 B/s rd, 0 op/s
Dec  1 04:59:02 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a278053e8fb3fabaaec80b05427ee3c23725c9eaf6d7d6de80fd86ccf8ff4d31-merged.mount: Deactivated successfully.
Dec  1 04:59:02 np0005540825 podman[147467]: 2025-12-01 09:59:02.846426491 +0000 UTC m=+0.212604099 container remove a060a25c1ececff2ff3cd77abb76c28b7462657d1835ecd953ed39b7f7feb554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curran, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 04:59:02 np0005540825 systemd[1]: libpod-conmon-a060a25c1ececff2ff3cd77abb76c28b7462657d1835ecd953ed39b7f7feb554.scope: Deactivated successfully.
Dec  1 04:59:03 np0005540825 podman[147618]: 2025-12-01 09:59:03.060799715 +0000 UTC m=+0.077719740 container create 6b999e565d365b134a2d8ff969fb20a9b814be2284bcfb3617e7a50dab710a4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_borg, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  1 04:59:03 np0005540825 systemd[1]: Started libpod-conmon-6b999e565d365b134a2d8ff969fb20a9b814be2284bcfb3617e7a50dab710a4b.scope.
Dec  1 04:59:03 np0005540825 podman[147618]: 2025-12-01 09:59:03.033349401 +0000 UTC m=+0.050269546 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 04:59:03 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:59:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d17f86646c64e4771490e19984531d5c9966d696458bd42198eb40013858ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 04:59:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d17f86646c64e4771490e19984531d5c9966d696458bd42198eb40013858ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 04:59:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d17f86646c64e4771490e19984531d5c9966d696458bd42198eb40013858ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 04:59:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d17f86646c64e4771490e19984531d5c9966d696458bd42198eb40013858ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 04:59:03 np0005540825 podman[147618]: 2025-12-01 09:59:03.176002797 +0000 UTC m=+0.192922872 container init 6b999e565d365b134a2d8ff969fb20a9b814be2284bcfb3617e7a50dab710a4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_borg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 04:59:03 np0005540825 podman[147618]: 2025-12-01 09:59:03.18434727 +0000 UTC m=+0.201267345 container start 6b999e565d365b134a2d8ff969fb20a9b814be2284bcfb3617e7a50dab710a4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 04:59:03 np0005540825 podman[147618]: 2025-12-01 09:59:03.189089827 +0000 UTC m=+0.206009952 container attach 6b999e565d365b134a2d8ff969fb20a9b814be2284bcfb3617e7a50dab710a4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_borg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 04:59:03 np0005540825 python3.9[147635]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:59:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:03.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:03 np0005540825 lvm[147746]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 04:59:03 np0005540825 lvm[147746]: VG ceph_vg0 finished
Dec  1 04:59:03 np0005540825 jolly_borg[147646]: {}
Dec  1 04:59:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:03 np0005540825 systemd[1]: libpod-6b999e565d365b134a2d8ff969fb20a9b814be2284bcfb3617e7a50dab710a4b.scope: Deactivated successfully.
Dec  1 04:59:03 np0005540825 podman[147618]: 2025-12-01 09:59:03.949741074 +0000 UTC m=+0.966661149 container died 6b999e565d365b134a2d8ff969fb20a9b814be2284bcfb3617e7a50dab710a4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_borg, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  1 04:59:03 np0005540825 systemd[1]: libpod-6b999e565d365b134a2d8ff969fb20a9b814be2284bcfb3617e7a50dab710a4b.scope: Consumed 1.120s CPU time.
Dec  1 04:59:03 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b7d17f86646c64e4771490e19984531d5c9966d696458bd42198eb40013858ac-merged.mount: Deactivated successfully.
Dec  1 04:59:04 np0005540825 podman[147618]: 2025-12-01 09:59:04.009542574 +0000 UTC m=+1.026462649 container remove 6b999e565d365b134a2d8ff969fb20a9b814be2284bcfb3617e7a50dab710a4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_borg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 04:59:04 np0005540825 systemd[1]: libpod-conmon-6b999e565d365b134a2d8ff969fb20a9b814be2284bcfb3617e7a50dab710a4b.scope: Deactivated successfully.
Dec  1 04:59:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 04:59:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:59:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 04:59:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:59:04 np0005540825 python3.9[147914]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:59:04 np0005540825 ovs-vsctl[147915]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
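
The ovs-vsctl call above seeds the Open_vSwitch table's external_ids with the ovn-controller chassis configuration (encap IP and type, southbound remote, bridge and MAC mappings, probe and wait tunables), which ovn-controller reads from the local OVSDB. Passing each pair as its own argv element, as below, sidesteps the shell quoting visible in the raw task for ovn-chassis-mac-mappings. A sketch using a representative subset of the logged keys:

    import subprocess

    EXTERNAL_IDS = {
        "hostname": "compute-0.ctlplane.example.com",
        "ovn-bridge": "br-int",
        "ovn-bridge-mappings": "datacentre:br-ex",
        "ovn-chassis-mac-mappings": "datacentre:2e:0a:c6:22:5a:f7",
        "ovn-encap-ip": "172.19.0.100",
        "ovn-encap-type": "geneve",
        "ovn-remote": "ssl:ovsdbserver-sb.openstack.svc:6642",
        "ovn-monitor-all": "True",
    }

    def configure_ovn_chassis() -> None:
        # One ovs-vsctl invocation == one OVSDB transaction.
        cmd = ["ovs-vsctl", "set", "open", "."]
        cmd += [f"external_ids:{key}={value}" for key, value in EXTERNAL_IDS.items()]
        subprocess.run(cmd, check=True)
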
Dec  1 04:59:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:04 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:04.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 288 B/s rd, 0 op/s
Dec  1 04:59:05 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:59:05 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 04:59:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:05 np0005540825 python3.9[148069]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:59:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:05.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:06 np0005540825 python3.9[148227]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:59:06 np0005540825 ovs-vsctl[148228]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
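
Taken together, the tasks at 04:59:05-04:59:06 are a check-then-create: `ovs-vsctl show | grep -q "Manager"` tests for an existing Manager row, and only when none exists is one created with target ptcp:6640:127.0.0.1 and attached to manager_options, opening a passive local OVSDB listener on port 6640. The same idempotent pattern sketched in Python, with the existence test done via `ovs-vsctl get-manager` instead of grepping `show` output:

    import subprocess

    def ensure_local_manager(target: str = "ptcp:6640:127.0.0.1") -> None:
        managers = subprocess.run(["ovs-vsctl", "get-manager"],
                                  capture_output=True, text=True,
                                  check=True).stdout
        if managers.strip():
            return  # a Manager is already configured; nothing to do
        subprocess.run(
            ["ovs-vsctl", "--timeout=5", "--id=@manager", "--",
             "create", "Manager", f'target="{target}"', "--",
             "add", "Open_vSwitch", ".", "manager_options", "@manager"],
            check=True)
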
Dec  1 04:59:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:06 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:06.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 288 B/s rd, 0 op/s
Dec  1 04:59:06 np0005540825 python3.9[148378]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:59:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:07.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:59:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:07 np0005540825 python3.9[148534]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:59:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:07.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:08 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:08.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:08 np0005540825 python3.9[148686]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:59:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 288 B/s rd, 0 op/s
Dec  1 04:59:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:08.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:59:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:09 np0005540825 python3.9[148764]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:59:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:09 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:59:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:59:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:59:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:59:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:59:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:59:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:59:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:59:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:09 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:10 np0005540825 python3.9[148918]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:59:10 np0005540825 python3.9[148996]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:59:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:10 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:10.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 04:59:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:11] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:59:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:11] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:59:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:11 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:11 np0005540825 python3.9[149150]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
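
mode=420 in the task above is not a typo: Ansible logs the mode as a decimal integer, and 420 is simply octal 0644 (rw-r--r--) in base ten:

    >>> oct(420)
    '0o644'
    >>> 0o644
    420
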
Dec  1 04:59:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:11.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:11 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:12 np0005540825 python3.9[149302]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:59:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:12 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:12 np0005540825 python3.9[149380]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:59:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:12.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:59:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:13 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:13 np0005540825 python3.9[149534]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:59:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:13.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:13 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:13 np0005540825 python3.9[149612]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:59:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:14 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:14.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:59:14 np0005540825 python3.9[149764]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:59:14 np0005540825 systemd[1]: Reloading.
Dec  1 04:59:14 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:59:14 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
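
The ansible.builtin.systemd task at 04:59:14 amounts to a daemon-reload followed by enable and start of edpm-container-shutdown; the "Reloading." line and the two generator messages above are ordinary side effects of the reload, and the 91-*.preset files written earlier mark these units as enabled-by-default for `systemctl preset`. A minimal sketch of those three steps (systemd and root privileges assumed, as on the logged host):

    import subprocess

    def enable_and_start(unit: str = "edpm-container-shutdown") -> None:
        for args in (["daemon-reload"], ["enable", unit], ["start", unit]):
            subprocess.run(["systemctl", *args], check=True)
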
Dec  1 04:59:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095915 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
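
HAProxy's "Layer4 connection problem ... Connection refused" means its health check is a bare TCP connect that was refused, so backend server nfs.cephfs.1 is marked DOWN with two active servers remaining; a connect-only probe of this kind also never sends a PROXY preamble, which fits the ganesha TIRPC events around it. The same style of check sketched in Python (host and port are placeholders; the log does not name the backend's address):

    import socket

    def layer4_check(host: str, port: int, timeout: float = 1.0) -> bool:
        """TCP-connect health check in the spirit of HAProxy's layer4 'check'."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:  # refused, unreachable, timed out, ...
            return False
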
Dec  1 04:59:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:15 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:15.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:15 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:16 np0005540825 python3.9[149956]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:59:16 np0005540825 python3.9[150034]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:59:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:16 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:16.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:59:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:17.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:59:17 np0005540825 python3.9[150188]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:59:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:17 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:17.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:17 np0005540825 python3.9[150267]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:59:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:17 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:18 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:18.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:18 np0005540825 python3.9[150444]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:59:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:59:18 np0005540825 systemd[1]: Reloading.
Dec  1 04:59:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:18.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:59:18 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:59:18 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:59:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:19 np0005540825 systemd[1]: Starting Create netns directory...
Dec  1 04:59:19 np0005540825 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 04:59:19 np0005540825 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 04:59:19 np0005540825 systemd[1]: Finished Create netns directory.
Dec  1 04:59:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:19 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:19.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:19 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:20 np0005540825 python3.9[150641]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:59:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:20 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:20.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 04:59:20 np0005540825 python3.9[150793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:59:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:21] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:59:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:21] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Dec  1 04:59:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:21 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:21 np0005540825 python3.9[150918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583160.370362-1364-32313945884056/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:59:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:21.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:21 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:22 np0005540825 python3.9[151070]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:59:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:22 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:22.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 04:59:23 np0005540825 python3.9[151223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 04:59:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:23 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:23.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:23 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:24 np0005540825 python3.9[151347]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583162.8765998-1439-2385718900018/.source.json _original_basename=.akbr0nw3 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:59:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:24 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:59:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:24 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:59:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:59:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:24.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 04:59:25 np0005540825 python3.9[151499]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:59:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:25 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:25.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:25 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:26 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:26.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 04:59:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:27.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:59:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 04:59:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:59:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 04:59:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:27.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:27 np0005540825 python3.9[151930]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  1 04:59:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:28 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:28.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 04:59:28 np0005540825 python3.9[152082]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 04:59:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:28.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:59:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:29 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:29 np0005540825 python3.9[152236]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 04:59:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:29.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:29 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:30 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 04:59:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:30 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:30.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:59:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:31] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec  1 04:59:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:31] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Dec  1 04:59:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:31.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:31 np0005540825 python3[152416]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 04:59:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:32 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:32.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:59:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:33 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:33.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:33 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:34 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:34.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:59:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095935 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 04:59:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:35 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:35.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:35 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:36 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:36.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 04:59:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:37.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:59:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:37 np0005540825 podman[152431]: 2025-12-01 09:59:37.61905421 +0000 UTC m=+5.650770258 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  1 04:59:37 np0005540825 podman[152559]: 2025-12-01 09:59:37.775154715 +0000 UTC m=+0.050827891 container create 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  1 04:59:37 np0005540825 podman[152559]: 2025-12-01 09:59:37.746844687 +0000 UTC m=+0.022517873 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  1 04:59:37 np0005540825 python3[152416]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  1 04:59:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:37.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:38 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194002740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:38.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec  1 04:59:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:38.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:59:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:38.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:59:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:39 np0005540825 python3.9[152774]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_09:59:39
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.nfs', 'vms']
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 04:59:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:39 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:59:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 04:59:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 04:59:39 np0005540825 python3.9[152930]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:59:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:59:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:39.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:59:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:39 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:40 np0005540825 python3.9[153006]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 04:59:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:40 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:40.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec  1 04:59:41 np0005540825 python3.9[153157]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764583180.3837225-1703-142609083628042/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 04:59:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:41] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 04:59:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:41] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 04:59:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194002740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:41.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:41 np0005540825 python3.9[153235]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 04:59:41 np0005540825 systemd[1]: Reloading.
Dec  1 04:59:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:42 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:59:42 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:59:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:42 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:42.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:59:42 np0005540825 python3.9[153347]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 04:59:42 np0005540825 systemd[1]: Reloading.
Dec  1 04:59:43 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:59:43 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:59:43 np0005540825 systemd[1]: Starting ovn_controller container...
Dec  1 04:59:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:43 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:43 np0005540825 systemd[1]: Started libcrun container.
Dec  1 04:59:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb94d4be70d4bac3e7e3865ff1e9fc35651922c046f356bcc655ac014f3a1293/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  1 04:59:43 np0005540825 systemd[1]: Started /usr/bin/podman healthcheck run 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf.
Dec  1 04:59:43 np0005540825 podman[153389]: 2025-12-01 09:59:43.526083051 +0000 UTC m=+0.170998915 container init 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 04:59:43 np0005540825 ovn_controller[153404]: + sudo -E kolla_set_configs
Dec  1 04:59:43 np0005540825 podman[153389]: 2025-12-01 09:59:43.553880935 +0000 UTC m=+0.198796759 container start 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 04:59:43 np0005540825 edpm-start-podman-container[153389]: ovn_controller
Dec  1 04:59:43 np0005540825 systemd[1]: Created slice User Slice of UID 0.
Dec  1 04:59:43 np0005540825 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  1 04:59:43 np0005540825 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  1 04:59:43 np0005540825 systemd[1]: Starting User Manager for UID 0...
Dec  1 04:59:43 np0005540825 edpm-start-podman-container[153388]: Creating additional drop-in dependency for "ovn_controller" (976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf)
Dec  1 04:59:43 np0005540825 systemd[1]: Reloading.
Dec  1 04:59:43 np0005540825 podman[153411]: 2025-12-01 09:59:43.692727629 +0000 UTC m=+0.128908069 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  1 04:59:43 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 04:59:43 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 04:59:43 np0005540825 systemd[153443]: Queued start job for default target Main User Target.
Dec  1 04:59:43 np0005540825 systemd[153443]: Created slice User Application Slice.
Dec  1 04:59:43 np0005540825 systemd[153443]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  1 04:59:43 np0005540825 systemd[153443]: Started Daily Cleanup of User's Temporary Directories.
Dec  1 04:59:43 np0005540825 systemd[153443]: Reached target Paths.
Dec  1 04:59:43 np0005540825 systemd[153443]: Reached target Timers.
Dec  1 04:59:43 np0005540825 systemd[153443]: Starting D-Bus User Message Bus Socket...
Dec  1 04:59:43 np0005540825 systemd[153443]: Starting Create User's Volatile Files and Directories...
Dec  1 04:59:43 np0005540825 systemd[153443]: Finished Create User's Volatile Files and Directories.
Dec  1 04:59:43 np0005540825 systemd[153443]: Listening on D-Bus User Message Bus Socket.
Dec  1 04:59:43 np0005540825 systemd[153443]: Reached target Sockets.
Dec  1 04:59:43 np0005540825 systemd[153443]: Reached target Basic System.
Dec  1 04:59:43 np0005540825 systemd[153443]: Reached target Main User Target.
Dec  1 04:59:43 np0005540825 systemd[153443]: Startup finished in 175ms.
Dec  1 04:59:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:43.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:43 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194002740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:43 np0005540825 systemd[1]: Started User Manager for UID 0.
Dec  1 04:59:43 np0005540825 systemd[1]: Started ovn_controller container.
Dec  1 04:59:43 np0005540825 systemd[1]: 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf-225046d35ea54af0.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 04:59:43 np0005540825 systemd[1]: 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf-225046d35ea54af0.service: Failed with result 'exit-code'.
Dec  1 04:59:43 np0005540825 systemd[1]: Started Session c1 of User root.
Dec  1 04:59:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: INFO:__main__:Validating config file
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: INFO:__main__:Writing out command to execute
Dec  1 04:59:44 np0005540825 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: ++ cat /run_command
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: + ARGS=
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: + sudo kolla_copy_cacerts
Dec  1 04:59:44 np0005540825 systemd[1]: Started Session c2 of User root.
Dec  1 04:59:44 np0005540825 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: + [[ ! -n '' ]]
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: + . kolla_extend_start
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: + umask 0022
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00003|main|INFO|OVN internal version is: [24.03.8-20.33.0-76.8]
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  1 04:59:44 np0005540825 NetworkManager[48963]: <info>  [1764583184.1689] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec  1 04:59:44 np0005540825 NetworkManager[48963]: <info>  [1764583184.1700] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 04:59:44 np0005540825 NetworkManager[48963]: <info>  [1764583184.1712] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  1 04:59:44 np0005540825 NetworkManager[48963]: <info>  [1764583184.1717] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec  1 04:59:44 np0005540825 NetworkManager[48963]: <info>  [1764583184.1721] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 04:59:44 np0005540825 kernel: br-int: entered promiscuous mode
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00011|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00012|features|INFO|OVS Feature: ct_flush, state: supported
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00013|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00014|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00015|main|INFO|OVS feature set changed, force recompute.
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00016|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00019|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00020|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 04:59:44 np0005540825 ovn_controller[153404]: 2025-12-01T09:59:44Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 04:59:44 np0005540825 NetworkManager[48963]: <info>  [1764583184.1910] manager: (ovn-9a0c85-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  1 04:59:44 np0005540825 NetworkManager[48963]: <info>  [1764583184.1914] manager: (ovn-968d9d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Dec  1 04:59:44 np0005540825 kernel: genev_sys_6081: entered promiscuous mode
Dec  1 04:59:44 np0005540825 NetworkManager[48963]: <info>  [1764583184.2083] device (genev_sys_6081): carrier: link connected
Dec  1 04:59:44 np0005540825 NetworkManager[48963]: <info>  [1764583184.2087] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Dec  1 04:59:44 np0005540825 systemd-udevd[153538]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 04:59:44 np0005540825 systemd-udevd[153542]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 04:59:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:44 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:44.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:59:45 np0005540825 NetworkManager[48963]: <info>  [1764583185.2518] manager: (ovn-b99910-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec  1 04:59:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:45 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:45 np0005540825 python3.9[153675]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:59:45 np0005540825 ovs-vsctl[153676]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  1 04:59:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  1 04:59:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:45.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  1 04:59:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:45 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:46 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11940018d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:59:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:46.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:59:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 04:59:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:47.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:59:47 np0005540825 python3.9[153829]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:59:47 np0005540825 ovs-vsctl[153831]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  1 04:59:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:59:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:47.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:59:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:48 np0005540825 python3.9[153985]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 04:59:48 np0005540825 ovs-vsctl[153986]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  1 04:59:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:48 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:59:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:48.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:59:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 04:59:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095948 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 04:59:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:48.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:59:48 np0005540825 systemd[1]: session-50.scope: Deactivated successfully.
Dec  1 04:59:48 np0005540825 systemd[1]: session-50.scope: Consumed 1min 4.783s CPU time.
Dec  1 04:59:48 np0005540825 systemd-logind[789]: Session 50 logged out. Waiting for processes to exit.
Dec  1 04:59:48 np0005540825 systemd-logind[789]: Removed session 50.
Dec  1 04:59:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/095949 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 04:59:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11940018d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:49.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:50 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:50.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 04:59:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:51] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 04:59:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:09:59:51] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 04:59:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:51 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:51.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:51 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:52 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:52.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  1 04:59:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:53 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:53.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:53 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:54 np0005540825 systemd[1]: Stopping User Manager for UID 0...
Dec  1 04:59:54 np0005540825 systemd[153443]: Activating special unit Exit the Session...
Dec  1 04:59:54 np0005540825 systemd[153443]: Stopped target Main User Target.
Dec  1 04:59:54 np0005540825 systemd[153443]: Stopped target Basic System.
Dec  1 04:59:54 np0005540825 systemd[153443]: Stopped target Paths.
Dec  1 04:59:54 np0005540825 systemd[153443]: Stopped target Sockets.
Dec  1 04:59:54 np0005540825 systemd[153443]: Stopped target Timers.
Dec  1 04:59:54 np0005540825 systemd[153443]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  1 04:59:54 np0005540825 systemd[153443]: Closed D-Bus User Message Bus Socket.
Dec  1 04:59:54 np0005540825 systemd[153443]: Stopped Create User's Volatile Files and Directories.
Dec  1 04:59:54 np0005540825 systemd[153443]: Removed slice User Application Slice.
Dec  1 04:59:54 np0005540825 systemd[153443]: Reached target Shutdown.
Dec  1 04:59:54 np0005540825 systemd[153443]: Finished Exit the Session.
Dec  1 04:59:54 np0005540825 systemd[153443]: Reached target Exit the Session.
Dec  1 04:59:54 np0005540825 systemd[1]: user@0.service: Deactivated successfully.
Dec  1 04:59:54 np0005540825 systemd[1]: Stopped User Manager for UID 0.
Dec  1 04:59:54 np0005540825 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  1 04:59:54 np0005540825 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  1 04:59:54 np0005540825 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  1 04:59:54 np0005540825 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  1 04:59:54 np0005540825 systemd[1]: Removed slice User Slice of UID 0.
Dec  1 04:59:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 04:59:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 04:59:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:54.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:54 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  1 04:59:54 np0005540825 systemd-logind[789]: New session 52 of user zuul.
Dec  1 04:59:54 np0005540825 systemd[1]: Started Session 52 of User zuul.
Dec  1 04:59:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:55.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:55 np0005540825 python3.9[154175]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 04:59:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:56 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:59:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:56.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:59:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:59:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:57.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 04:59:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:57.027Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:59:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:57.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 04:59:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:57 np0005540825 python3.9[154333]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:59:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 04:59:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:57.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 04:59:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:58 np0005540825 python3.9[154508]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:59:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:58 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 04:59:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 04:59:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:58 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:09:59:58.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 04:59:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  1 04:59:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T09:59:58.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 04:59:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 04:59:59 np0005540825 python3.9[154662]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:59:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 04:59:59 np0005540825 python3.9[154816]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 04:59:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 04:59:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 04:59:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:09:59:59.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 04:59:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 09:59:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Dec  1 05:00:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Dec  1 05:00:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] :      osd.2 observed slow operation indications in BlueStore
Dec  1 05:00:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec  1 05:00:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.pytvsu on compute-0 is in unknown state
Dec  1 05:00:00 np0005540825 python3.9[154968]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:00.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:00 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Dec  1 05:00:01 np0005540825 ceph-mon[74416]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Dec  1 05:00:01 np0005540825 ceph-mon[74416]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Dec  1 05:00:01 np0005540825 ceph-mon[74416]:     osd.2 observed slow operation indications in BlueStore
Dec  1 05:00:01 np0005540825 ceph-mon[74416]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec  1 05:00:01 np0005540825 ceph-mon[74416]:    daemon nfs.cephfs.2.0.compute-0.pytvsu on compute-0 is in unknown state
Dec  1 05:00:01 np0005540825 python3.9[155118]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 05:00:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:01] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec  1 05:00:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:01] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Dec  1 05:00:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:00:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:00:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:01.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:02 np0005540825 python3.9[155272]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  1 05:00:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:02 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:00:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:02 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:00:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:02.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:00:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:00:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:03.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:03 np0005540825 python3.9[155425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:04 np0005540825 python3.9[155546]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583203.2481122-218-167632135096886/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:04 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:04.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:00:05 np0005540825 python3.9[155766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:00:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 680 B/s wr, 2 op/s
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:00:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:00:05 np0005540825 python3.9[155950]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583204.8078496-263-269803635652455/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:05.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:00:05 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:00:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:06 np0005540825 podman[156018]: 2025-12-01 10:00:06.128938474 +0000 UTC m=+0.072557004 container create 6bc13a5ad479a8275e103e607ef9c78e3c08c8928a9d7bff4eb4f681bbdc8944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_engelbart, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:00:06 np0005540825 systemd[1]: Started libpod-conmon-6bc13a5ad479a8275e103e607ef9c78e3c08c8928a9d7bff4eb4f681bbdc8944.scope.
Dec  1 05:00:06 np0005540825 podman[156018]: 2025-12-01 10:00:06.095736056 +0000 UTC m=+0.039354636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:00:06 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:00:06 np0005540825 podman[156018]: 2025-12-01 10:00:06.237725497 +0000 UTC m=+0.181344067 container init 6bc13a5ad479a8275e103e607ef9c78e3c08c8928a9d7bff4eb4f681bbdc8944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_engelbart, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 05:00:06 np0005540825 podman[156018]: 2025-12-01 10:00:06.246790423 +0000 UTC m=+0.190408923 container start 6bc13a5ad479a8275e103e607ef9c78e3c08c8928a9d7bff4eb4f681bbdc8944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_engelbart, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:00:06 np0005540825 podman[156018]: 2025-12-01 10:00:06.250671118 +0000 UTC m=+0.194289638 container attach 6bc13a5ad479a8275e103e607ef9c78e3c08c8928a9d7bff4eb4f681bbdc8944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_engelbart, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec  1 05:00:06 np0005540825 competent_engelbart[156035]: 167 167
Dec  1 05:00:06 np0005540825 systemd[1]: libpod-6bc13a5ad479a8275e103e607ef9c78e3c08c8928a9d7bff4eb4f681bbdc8944.scope: Deactivated successfully.
Dec  1 05:00:06 np0005540825 podman[156018]: 2025-12-01 10:00:06.256961668 +0000 UTC m=+0.200580268 container died 6bc13a5ad479a8275e103e607ef9c78e3c08c8928a9d7bff4eb4f681bbdc8944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_engelbart, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 05:00:06 np0005540825 systemd[1]: var-lib-containers-storage-overlay-30f0681ace2ebf8f03095dd6e3a871db5638650bee50d9d70307ba105d4e598f-merged.mount: Deactivated successfully.
Dec  1 05:00:06 np0005540825 podman[156018]: 2025-12-01 10:00:06.306673053 +0000 UTC m=+0.250291553 container remove 6bc13a5ad479a8275e103e607ef9c78e3c08c8928a9d7bff4eb4f681bbdc8944 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_engelbart, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:00:06 np0005540825 systemd[1]: libpod-conmon-6bc13a5ad479a8275e103e607ef9c78e3c08c8928a9d7bff4eb4f681bbdc8944.scope: Deactivated successfully.
Dec  1 05:00:06 np0005540825 podman[156152]: 2025-12-01 10:00:06.56716412 +0000 UTC m=+0.066531331 container create 6bc5f1555692523a13d8e01f7cbacc58e35be8abe051adca7749bc25c396d718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 05:00:06 np0005540825 systemd[1]: Started libpod-conmon-6bc5f1555692523a13d8e01f7cbacc58e35be8abe051adca7749bc25c396d718.scope.
Dec  1 05:00:06 np0005540825 podman[156152]: 2025-12-01 10:00:06.545217906 +0000 UTC m=+0.044585167 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:00:06 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:00:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65db5e61368d5abd6cd34e6ba78d21fc722f5b46247d6b32df9352c7cfd2212/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65db5e61368d5abd6cd34e6ba78d21fc722f5b46247d6b32df9352c7cfd2212/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65db5e61368d5abd6cd34e6ba78d21fc722f5b46247d6b32df9352c7cfd2212/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65db5e61368d5abd6cd34e6ba78d21fc722f5b46247d6b32df9352c7cfd2212/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65db5e61368d5abd6cd34e6ba78d21fc722f5b46247d6b32df9352c7cfd2212/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:06 np0005540825 podman[156152]: 2025-12-01 10:00:06.659655032 +0000 UTC m=+0.159022253 container init 6bc5f1555692523a13d8e01f7cbacc58e35be8abe051adca7749bc25c396d718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:00:06 np0005540825 podman[156152]: 2025-12-01 10:00:06.669592111 +0000 UTC m=+0.168959352 container start 6bc5f1555692523a13d8e01f7cbacc58e35be8abe051adca7749bc25c396d718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  1 05:00:06 np0005540825 podman[156152]: 2025-12-01 10:00:06.673113927 +0000 UTC m=+0.172481148 container attach 6bc5f1555692523a13d8e01f7cbacc58e35be8abe051adca7749bc25c396d718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_feistel, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  1 05:00:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:06.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:06 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:06 np0005540825 python3.9[156203]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 05:00:07 np0005540825 vigorous_feistel[156204]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:00:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:07.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:00:07 np0005540825 vigorous_feistel[156204]: --> All data devices are unavailable
Dec  1 05:00:07 np0005540825 systemd[1]: libpod-6bc5f1555692523a13d8e01f7cbacc58e35be8abe051adca7749bc25c396d718.scope: Deactivated successfully.
Dec  1 05:00:07 np0005540825 podman[156224]: 2025-12-01 10:00:07.098112236 +0000 UTC m=+0.029772247 container died 6bc5f1555692523a13d8e01f7cbacc58e35be8abe051adca7749bc25c396d718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  1 05:00:07 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b65db5e61368d5abd6cd34e6ba78d21fc722f5b46247d6b32df9352c7cfd2212-merged.mount: Deactivated successfully.
Dec  1 05:00:07 np0005540825 podman[156224]: 2025-12-01 10:00:07.148099458 +0000 UTC m=+0.079759379 container remove 6bc5f1555692523a13d8e01f7cbacc58e35be8abe051adca7749bc25c396d718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:00:07 np0005540825 systemd[1]: libpod-conmon-6bc5f1555692523a13d8e01f7cbacc58e35be8abe051adca7749bc25c396d718.scope: Deactivated successfully.
Dec  1 05:00:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  1 05:00:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:07 np0005540825 podman[156410]: 2025-12-01 10:00:07.741440812 +0000 UTC m=+0.052917523 container create 1f06af9d64b683233228499312de2faa002ffd819efc0e9dcfb631c018ce5ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:00:07 np0005540825 systemd[1]: Started libpod-conmon-1f06af9d64b683233228499312de2faa002ffd819efc0e9dcfb631c018ce5ef3.scope.
Dec  1 05:00:07 np0005540825 podman[156410]: 2025-12-01 10:00:07.71290769 +0000 UTC m=+0.024384491 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:00:07 np0005540825 python3.9[156391]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 05:00:07 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:00:07 np0005540825 podman[156410]: 2025-12-01 10:00:07.852676641 +0000 UTC m=+0.164153432 container init 1f06af9d64b683233228499312de2faa002ffd819efc0e9dcfb631c018ce5ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Dec  1 05:00:07 np0005540825 podman[156410]: 2025-12-01 10:00:07.864369698 +0000 UTC m=+0.175846439 container start 1f06af9d64b683233228499312de2faa002ffd819efc0e9dcfb631c018ce5ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:00:07 np0005540825 podman[156410]: 2025-12-01 10:00:07.869125346 +0000 UTC m=+0.180602107 container attach 1f06af9d64b683233228499312de2faa002ffd819efc0e9dcfb631c018ce5ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:00:07 np0005540825 loving_mclaren[156427]: 167 167
Dec  1 05:00:07 np0005540825 systemd[1]: libpod-1f06af9d64b683233228499312de2faa002ffd819efc0e9dcfb631c018ce5ef3.scope: Deactivated successfully.
Dec  1 05:00:07 np0005540825 conmon[156427]: conmon 1f06af9d64b683233228 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f06af9d64b683233228499312de2faa002ffd819efc0e9dcfb631c018ce5ef3.scope/container/memory.events
Dec  1 05:00:07 np0005540825 podman[156410]: 2025-12-01 10:00:07.873490195 +0000 UTC m=+0.184966966 container died 1f06af9d64b683233228499312de2faa002ffd819efc0e9dcfb631c018ce5ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:00:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:07.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:07 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5fb70993835129c338dd4383043d310e1862413013e9a5625ab588b9e5ed223d-merged.mount: Deactivated successfully.
Dec  1 05:00:07 np0005540825 podman[156410]: 2025-12-01 10:00:07.925096121 +0000 UTC m=+0.236572842 container remove 1f06af9d64b683233228499312de2faa002ffd819efc0e9dcfb631c018ce5ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclaren, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 05:00:07 np0005540825 systemd[1]: libpod-conmon-1f06af9d64b683233228499312de2faa002ffd819efc0e9dcfb631c018ce5ef3.scope: Deactivated successfully.
Dec  1 05:00:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:08 np0005540825 podman[156452]: 2025-12-01 10:00:08.096283273 +0000 UTC m=+0.042610644 container create 63ef6909572c6d9c95ec3fa79d58d76ecc801d75243a53231ad39760184f9bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_booth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:00:08 np0005540825 systemd[1]: Started libpod-conmon-63ef6909572c6d9c95ec3fa79d58d76ecc801d75243a53231ad39760184f9bc4.scope.
Dec  1 05:00:08 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:00:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a965da698c8ec10391c461d08572d8028c9ad0678a870298dba9f150cc3f76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a965da698c8ec10391c461d08572d8028c9ad0678a870298dba9f150cc3f76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a965da698c8ec10391c461d08572d8028c9ad0678a870298dba9f150cc3f76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a965da698c8ec10391c461d08572d8028c9ad0678a870298dba9f150cc3f76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:08 np0005540825 podman[156452]: 2025-12-01 10:00:08.076920239 +0000 UTC m=+0.023247660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:00:08 np0005540825 podman[156452]: 2025-12-01 10:00:08.175456845 +0000 UTC m=+0.121784236 container init 63ef6909572c6d9c95ec3fa79d58d76ecc801d75243a53231ad39760184f9bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_booth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 05:00:08 np0005540825 podman[156452]: 2025-12-01 10:00:08.184344695 +0000 UTC m=+0.130672066 container start 63ef6909572c6d9c95ec3fa79d58d76ecc801d75243a53231ad39760184f9bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_booth, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  1 05:00:08 np0005540825 podman[156452]: 2025-12-01 10:00:08.187742177 +0000 UTC m=+0.134069548 container attach 63ef6909572c6d9c95ec3fa79d58d76ecc801d75243a53231ad39760184f9bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_booth, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]: {
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:    "1": [
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:        {
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            "devices": [
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "/dev/loop3"
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            ],
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            "lv_name": "ceph_lv0",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            "lv_size": "21470642176",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            "name": "ceph_lv0",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            "tags": {
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.cluster_name": "ceph",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.crush_device_class": "",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.encrypted": "0",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.osd_id": "1",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.type": "block",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.vdo": "0",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:                "ceph.with_tpm": "0"
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            },
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            "type": "block",
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:            "vg_name": "ceph_vg0"
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:        }
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]:    ]
Dec  1 05:00:08 np0005540825 wizardly_booth[156468]: }
Dec  1 05:00:08 np0005540825 systemd[1]: libpod-63ef6909572c6d9c95ec3fa79d58d76ecc801d75243a53231ad39760184f9bc4.scope: Deactivated successfully.
Dec  1 05:00:08 np0005540825 podman[156452]: 2025-12-01 10:00:08.508119525 +0000 UTC m=+0.454446896 container died 63ef6909572c6d9c95ec3fa79d58d76ecc801d75243a53231ad39760184f9bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_booth, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:00:08 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d8a965da698c8ec10391c461d08572d8028c9ad0678a870298dba9f150cc3f76-merged.mount: Deactivated successfully.
Dec  1 05:00:08 np0005540825 podman[156452]: 2025-12-01 10:00:08.549764162 +0000 UTC m=+0.496091533 container remove 63ef6909572c6d9c95ec3fa79d58d76ecc801d75243a53231ad39760184f9bc4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_booth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 05:00:08 np0005540825 systemd[1]: libpod-conmon-63ef6909572c6d9c95ec3fa79d58d76ecc801d75243a53231ad39760184f9bc4.scope: Deactivated successfully.
Dec  1 05:00:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:08 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:08.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:08.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:00:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:09 np0005540825 podman[156581]: 2025-12-01 10:00:09.210054237 +0000 UTC m=+0.070637202 container create 209175e93148db426885656c831f8d04e3153827b1b0d4322d90f8176274ebea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:00:09 np0005540825 systemd[1]: Started libpod-conmon-209175e93148db426885656c831f8d04e3153827b1b0d4322d90f8176274ebea.scope.
Dec  1 05:00:09 np0005540825 podman[156581]: 2025-12-01 10:00:09.183687824 +0000 UTC m=+0.044270869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:00:09 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:00:09 np0005540825 podman[156581]: 2025-12-01 10:00:09.307798562 +0000 UTC m=+0.168381627 container init 209175e93148db426885656c831f8d04e3153827b1b0d4322d90f8176274ebea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_archimedes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  1 05:00:09 np0005540825 podman[156581]: 2025-12-01 10:00:09.317005981 +0000 UTC m=+0.177588936 container start 209175e93148db426885656c831f8d04e3153827b1b0d4322d90f8176274ebea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:00:09 np0005540825 podman[156581]: 2025-12-01 10:00:09.321729459 +0000 UTC m=+0.182312464 container attach 209175e93148db426885656c831f8d04e3153827b1b0d4322d90f8176274ebea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Dec  1 05:00:09 np0005540825 vibrant_archimedes[156602]: 167 167
Dec  1 05:00:09 np0005540825 systemd[1]: libpod-209175e93148db426885656c831f8d04e3153827b1b0d4322d90f8176274ebea.scope: Deactivated successfully.
Dec  1 05:00:09 np0005540825 podman[156581]: 2025-12-01 10:00:09.324813832 +0000 UTC m=+0.185396827 container died 209175e93148db426885656c831f8d04e3153827b1b0d4322d90f8176274ebea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:00:09 np0005540825 systemd[1]: var-lib-containers-storage-overlay-624b07ef4ac6ade657dd4b65f4cf264b649fedef8418b54fa3f14788ec5b083f-merged.mount: Deactivated successfully.
Dec  1 05:00:09 np0005540825 podman[156581]: 2025-12-01 10:00:09.364046474 +0000 UTC m=+0.224629429 container remove 209175e93148db426885656c831f8d04e3153827b1b0d4322d90f8176274ebea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_archimedes, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:00:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  1 05:00:09 np0005540825 systemd[1]: libpod-conmon-209175e93148db426885656c831f8d04e3153827b1b0d4322d90f8176274ebea.scope: Deactivated successfully.
Dec  1 05:00:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:09 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:00:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:00:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:00:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:00:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:00:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:00:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:00:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:00:09 np0005540825 podman[156646]: 2025-12-01 10:00:09.595148327 +0000 UTC m=+0.076406669 container create 0ad4f29b8dce0e92aebfa6851823c7d4c93e44f15ca88961c7705061e75c8e0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:00:09 np0005540825 systemd[1]: Started libpod-conmon-0ad4f29b8dce0e92aebfa6851823c7d4c93e44f15ca88961c7705061e75c8e0e.scope.
Dec  1 05:00:09 np0005540825 podman[156646]: 2025-12-01 10:00:09.561138836 +0000 UTC m=+0.042397228 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:00:09 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:00:09 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c4261a2e76ccaa7471403b1c2bb3707940186973dad22d6c6a8ce9e405e52a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:09 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c4261a2e76ccaa7471403b1c2bb3707940186973dad22d6c6a8ce9e405e52a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:09 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c4261a2e76ccaa7471403b1c2bb3707940186973dad22d6c6a8ce9e405e52a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:09 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c4261a2e76ccaa7471403b1c2bb3707940186973dad22d6c6a8ce9e405e52a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:00:09 np0005540825 podman[156646]: 2025-12-01 10:00:09.712203864 +0000 UTC m=+0.193462246 container init 0ad4f29b8dce0e92aebfa6851823c7d4c93e44f15ca88961c7705061e75c8e0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:00:09 np0005540825 podman[156646]: 2025-12-01 10:00:09.722878673 +0000 UTC m=+0.204137005 container start 0ad4f29b8dce0e92aebfa6851823c7d4c93e44f15ca88961c7705061e75c8e0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_thompson, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  1 05:00:09 np0005540825 podman[156646]: 2025-12-01 10:00:09.727401865 +0000 UTC m=+0.208660207 container attach 0ad4f29b8dce0e92aebfa6851823c7d4c93e44f15ca88961c7705061e75c8e0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_thompson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  1 05:00:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:09.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:09 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:10 np0005540825 lvm[156736]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:00:10 np0005540825 lvm[156736]: VG ceph_vg0 finished
Dec  1 05:00:10 np0005540825 lvm[156740]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:00:10 np0005540825 lvm[156740]: VG ceph_vg0 finished
Dec  1 05:00:10 np0005540825 relaxed_thompson[156662]: {}
Dec  1 05:00:10 np0005540825 systemd[1]: libpod-0ad4f29b8dce0e92aebfa6851823c7d4c93e44f15ca88961c7705061e75c8e0e.scope: Deactivated successfully.
Dec  1 05:00:10 np0005540825 systemd[1]: libpod-0ad4f29b8dce0e92aebfa6851823c7d4c93e44f15ca88961c7705061e75c8e0e.scope: Consumed 1.368s CPU time.
Dec  1 05:00:10 np0005540825 podman[156646]: 2025-12-01 10:00:10.56440944 +0000 UTC m=+1.045667752 container died 0ad4f29b8dce0e92aebfa6851823c7d4c93e44f15ca88961c7705061e75c8e0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 05:00:10 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b0c4261a2e76ccaa7471403b1c2bb3707940186973dad22d6c6a8ce9e405e52a-merged.mount: Deactivated successfully.
Dec  1 05:00:10 np0005540825 podman[156646]: 2025-12-01 10:00:10.617111986 +0000 UTC m=+1.098370308 container remove 0ad4f29b8dce0e92aebfa6851823c7d4c93e44f15ca88961c7705061e75c8e0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_thompson, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  1 05:00:10 np0005540825 systemd[1]: libpod-conmon-0ad4f29b8dce0e92aebfa6851823c7d4c93e44f15ca88961c7705061e75c8e0e.scope: Deactivated successfully.
Dec  1 05:00:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:00:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:00:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:00:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:00:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:10 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:10.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100010 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:00:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:11] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec  1 05:00:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:11] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec  1 05:00:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 582 B/s wr, 2 op/s
Dec  1 05:00:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100011 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:00:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:11 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:00:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:00:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:00:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:11.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:00:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:11 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:11 np0005540825 python3.9[156907]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 05:00:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:12 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:12.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:12 np0005540825 python3.9[157060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 582 B/s wr, 2 op/s
Dec  1 05:00:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:13 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:13 np0005540825 python3.9[157183]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583212.352666-374-205632857534029/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:13.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:13 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:14 np0005540825 ovn_controller[153404]: 2025-12-01T10:00:14Z|00025|memory|INFO|16000 kB peak resident set size after 30.1 seconds
Dec  1 05:00:14 np0005540825 ovn_controller[153404]: 2025-12-01T10:00:14Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Dec  1 05:00:14 np0005540825 podman[157307]: 2025-12-01 10:00:14.245800495 +0000 UTC m=+0.136031942 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:00:14 np0005540825 python3.9[157345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:14 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:14.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:14 np0005540825 python3.9[157480]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583213.780958-374-198623118542518/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 582 B/s wr, 2 op/s
Dec  1 05:00:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:15 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:15.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:15 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:16 np0005540825 python3.9[157632]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:16 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:16.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:16 np0005540825 python3.9[157753]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583215.684851-506-54219048992795/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:17.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:00:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:17.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:00:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:17.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:00:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 511 B/s wr, 2 op/s
Dec  1 05:00:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:17 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:17 np0005540825 python3.9[157905]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:17.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:17 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:18 np0005540825 python3.9[158026]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583217.060609-506-51820695248330/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:18 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:18.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:18.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:00:18 np0005540825 python3.9[158201]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:00:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:00:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:19 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:19 np0005540825 python3.9[158357]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:19.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:19 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:20 np0005540825 python3.9[158509]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:20 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:20.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:21 np0005540825 python3.9[158587]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:21] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec  1 05:00:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:21] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Dec  1 05:00:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:00:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:21 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:21 np0005540825 python3.9[158741]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:21.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:21 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:22 np0005540825 python3.9[158819]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:22 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:22.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:23 np0005540825 python3.9[158973]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:00:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:00:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:23 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:23 np0005540825 python3.9[159127]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:23.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:23 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:24 np0005540825 python3.9[159205]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:00:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:00:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:00:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:24 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:24.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:00:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100025 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:00:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:25.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:25 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:25 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:26 np0005540825 python3.9[159357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:26 np0005540825 python3.9[159437]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:00:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:26 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:26.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:27.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:00:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:00:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194001cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:27 np0005540825 python3.9[159591]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:00:27 np0005540825 systemd[1]: Reloading.
Dec  1 05:00:27 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:00:27 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:00:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:27.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:27 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:28 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:28.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:28.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:00:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:28.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:00:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:00:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:29 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:29 np0005540825 python3.9[159782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:29.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:29 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194001cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:30 np0005540825 python3.9[159860]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:00:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:30 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:30.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:30 np0005540825 python3.9[160012]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:31] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec  1 05:00:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:31] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Dec  1 05:00:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:00:31 np0005540825 python3.9[160091]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:00:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:31.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:31 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:32 np0005540825 python3.9[160244]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:00:32 np0005540825 systemd[1]: Reloading.
Dec  1 05:00:32 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:00:32 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:00:32 np0005540825 systemd[1]: Starting Create netns directory...
Dec  1 05:00:32 np0005540825 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 05:00:32 np0005540825 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 05:00:32 np0005540825 systemd[1]: Finished Create netns directory.
Dec  1 05:00:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:32 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194001cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:32.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:00:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:33 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:33 np0005540825 python3.9[160438]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:33.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:33 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:34 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:00:34 np0005540825 python3.9[160590]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:34 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:34.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:35 np0005540825 python3.9[160713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583233.877677-959-195547724479844/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:00:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:35 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194001cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:35.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:35 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:36 np0005540825 python3.9[160867]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:00:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:36 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:36.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:36 np0005540825 python3.9[161019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:00:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:37.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:00:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:37.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:00:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:00:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:00:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec  1 05:00:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:00:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:37 np0005540825 python3.9[161144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583236.3579261-1034-168066132540750/.source.json _original_basename=.w9t7ck7h follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:00:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:37.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:37 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194001cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:38 np0005540825 python3.9[161296]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:00:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:38 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:38.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:38.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:00:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:00:39
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.nfs', '.rgw.root', 'default.rgw.log', 'default.rgw.control']
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:00:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:39 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:00:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:00:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
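The rbd_support module's MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler reload per-pool schedules on an interval; the empty start_after= indicates no pagination cursor. The same state can be inspected from a client host; a sketch using the rbd CLI via subprocess, assuming an admin keyring is available and that --recursive/--format json are accepted by this Ceph release:

    import json
    import subprocess

    def list_mirror_snapshot_schedules(pool: str) -> list:
        # CLI counterpart of the MirrorSnapshotScheduleHandler state reloaded
        # in the log lines above (pools: vms, volumes, backups, images).
        out = subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--pool", pool, "--recursive", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out) if out.strip() else []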
Dec  1 05:00:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:39.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
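Each radosgw request leaves three lines: a start marker, a done marker with op status, HTTP status and latency, and a beast access-log line. The anonymous "HEAD / HTTP/1.0" probes arriving from 192.168.122.100/.102 every two seconds are consistent with load-balancer health checks. A regex sketch for the beast line, with the field layout inferred from the samples in this log:

    import re

    BEAST = re.compile(
        r'beast: (?P<handle>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.* latency=(?P<latency>[\d.]+)s'
    )

    sample = ('beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous '
              '[01/Dec/2025:10:00:39.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.000000000s')
    m = BEAST.search(sample)
    assert m and m.group('status') == '200' and m.group('user') == 'anonymous'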
Dec  1 05:00:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:39 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:40 np0005540825 python3.9[161750]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  1 05:00:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:40 : epoch 692d661d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:00:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:40.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:40 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194001e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:41] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec  1 05:00:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:41] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec  1 05:00:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:00:41 np0005540825 python3.9[161903]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 05:00:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:00:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:41.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:00:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:41 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:42 np0005540825 python3.9[162056]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 05:00:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:42.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:42 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:00:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:43 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:43.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:43 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:44 np0005540825 python3[162236]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 05:00:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:44.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:44 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:45 np0005540825 podman[162263]: 2025-12-01 10:00:45.258279626 +0000 UTC m=+0.115346592 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec  1 05:00:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:00:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:45 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194001e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:45.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:45 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:46.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:46 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:47.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:00:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:47.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:00:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:47.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
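Alertmanager's ceph-dashboard receiver cannot POST to /api/prometheus_receiver on compute-1 and compute-2: first an i/o timeout, then "context deadline exceeded" once retries are canceled. A quick reachability probe for the same endpoint, taking the logged URL at face value (note it uses http:// on port 8443, which is itself worth questioning if the dashboard serves TLS there):

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        # The receiver expects a POST with an Alertmanager JSON payload; an
        # empty JSON body is enough to separate a connect timeout (the failure
        # logged above) from an HTTP-level rejection.
        req = urllib.request.Request(url, data=b"{}", method="POST")
        urllib.request.urlopen(req, timeout=5)
    except Exception as exc:
        print(f"receiver unreachable: {exc}")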
Dec  1 05:00:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:00:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100047 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:00:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
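The recurring ganesha.nfsd svc_vc_recv EVENTs report a failed PROXY-protocol header read on fd 48, and they interleave with the haproxy Layer4 backend checks seen just above; a plausible reading is that the health probes open and close the TCP connection without sending the PROXY preamble this listener expects. A hypothetical client that does send a PROXY v1 preamble before speaking NFS to the export:

    import socket

    def open_with_proxy_v1(host: str, port: int,
                           src: tuple, dst: tuple) -> socket.socket:
        s = socket.create_connection((host, port), timeout=5)
        # PROXY protocol v1 preamble (haproxy spec): a single CRLF-terminated
        # line carrying source/destination address and port.
        preamble = f"PROXY TCP4 {src[0]} {dst[0]} {src[1]} {dst[1]}\r\n"
        s.sendall(preamble.encode("ascii"))
        return s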
Dec  1 05:00:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:47.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:47 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:48.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:48 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:48.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:00:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Dec  1 05:00:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:49.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:49 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:50.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:50 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:51] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec  1 05:00:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:00:51] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Dec  1 05:00:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Dec  1 05:00:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:51 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:51.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:51 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11cc00aa50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:52.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:52 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:00:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:53 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1194004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:53.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:53 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:54.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:54 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:00:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
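The audit line shows mgr.compute-0.fospow dispatching {"prefix": "osd blocklist ls", "format": "json"} to the monitor. The same command can be issued from Python through librados' mon_command, assuming a readable ceph.conf and keyring on the client (paths here are illustrative):

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
        ret, out, errs = cluster.mon_command(cmd, b"")
        blocklist = json.loads(out) if out else []
        print(ret, blocklist, errs)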
Dec  1 05:00:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:00:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:55 np0005540825 podman[162249]: 2025-12-01 10:00:55.595916183 +0000 UTC m=+11.068159574 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 05:00:55 np0005540825 podman[162425]: 2025-12-01 10:00:55.818062483 +0000 UTC m=+0.068199506 container create 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 05:00:55 np0005540825 podman[162425]: 2025-12-01 10:00:55.78911217 +0000 UTC m=+0.039249193 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 05:00:55 np0005540825 python3[162236]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
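The PODMAN-CONTAINER-DEBUG line shows how ansible-edpm_container_manage flattens config_data into podman create flags: environment entries become --env, healthcheck.test becomes --healthcheck-command, net/pid/privileged/user map to the matching flags, and each volumes entry becomes --volume, with the image last. A reduced sketch of that translation (illustrative; the real module also emits labels, log driver and conmon pidfile options, as visible above):

    def podman_create_args(name: str, cfg: dict) -> list[str]:
        args = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("pid"):
            args += ["--pid", cfg["pid"]]
        if cfg.get("privileged"):
            args += ["--privileged=True"]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        return args + [cfg["image"]]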
Dec  1 05:00:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:55.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:55 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:56.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:56 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:57.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:00:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Dec  1 05:00:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Dec  1 05:00:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Dec  1 05:00:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec  1 05:00:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Dec  1 05:00:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Dec  1 05:00:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Dec  1 05:00:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:00:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:00:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:57.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:00:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:57 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:58 np0005540825 python3.9[162618]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:00:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:00:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:00:58.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:00:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:58 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:00:58.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:00:59 np0005540825 python3.9[162797]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:00:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:00:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:00:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:00:59 np0005540825 python3.9[162875]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:00:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:00:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:00:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:00:59.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:00:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:00:59 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:00 np0005540825 python3.9[163026]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764583259.6257155-1298-217978603598438/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:00 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:00 np0005540825 python3.9[163102]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 05:01:00 np0005540825 systemd[1]: Reloading.
Dec  1 05:01:01 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:01:01 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:01:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:01] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec  1 05:01:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:01] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Dec  1 05:01:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 0 B/s wr, 142 op/s
Dec  1 05:01:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c4003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:01 np0005540825 python3.9[163214]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:01:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:01:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:01.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:01:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:01 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:02 np0005540825 systemd[1]: Reloading.
Dec  1 05:01:02 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:01:02 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:01:02 np0005540825 systemd[1]: Starting ovn_metadata_agent container...
Dec  1 05:01:02 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:01:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b66a3a6a5ff9bd643aa32b3b4f5870ef52ee6763b6b7b760c9a42b8f356d8b79/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b66a3a6a5ff9bd643aa32b3b4f5870ef52ee6763b6b7b760c9a42b8f356d8b79/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:02 np0005540825 systemd[1]: Started /usr/bin/podman healthcheck run 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae.
Dec  1 05:01:02 np0005540825 podman[163270]: 2025-12-01 10:01:02.542038568 +0000 UTC m=+0.161786378 container init 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: + sudo -E kolla_set_configs
Dec  1 05:01:02 np0005540825 podman[163270]: 2025-12-01 10:01:02.581586478 +0000 UTC m=+0.201334258 container start 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Dec  1 05:01:02 np0005540825 edpm-start-podman-container[163270]: ovn_metadata_agent
Dec  1 05:01:02 np0005540825 podman[163293]: 2025-12-01 10:01:02.685109279 +0000 UTC m=+0.083473839 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:01:02 np0005540825 edpm-start-podman-container[163269]: Creating additional drop-in dependency for "ovn_metadata_agent" (77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae)
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Validating config file
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Copying service configuration files
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Writing out command to execute
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: ++ cat /run_command
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: + CMD=neutron-ovn-metadata-agent
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: + ARGS=
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: + sudo kolla_copy_cacerts
Dec  1 05:01:02 np0005540825 systemd[1]: Reloading.
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: + [[ ! -n '' ]]
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: + . kolla_extend_start
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: Running command: 'neutron-ovn-metadata-agent'
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: + umask 0022
Dec  1 05:01:02 np0005540825 ovn_metadata_agent[163286]: + exec neutron-ovn-metadata-agent
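The entrypoint trace above is the standard kolla startup pattern: kolla_set_configs reads /var/lib/kolla/config_files/config.json, copies the declared config files into place (the "Copying ..." and "Setting permission ..." lines), and writes the service command to /run_command, which the wrapper then cats and execs. A minimal sketch of that flow collapsed into one function, assuming the same file locations and the usual config.json keys (the real kolla_set_configs also validates ownership, permissions and the copy strategy):

    import json
    import os
    import shutil

    CONFIG = "/var/lib/kolla/config_files/config.json"  # path as in the log

    def kolla_set_configs_and_exec():
        with open(CONFIG) as f:
            cfg = json.load(f)
        for entry in cfg.get("config_files", []):   # the "Copying ..." lines
            shutil.copy(entry["source"], entry["dest"])
        with open("/run_command", "w") as f:        # consumed by `cat /run_command`
            f.write(cfg["command"])
        argv = cfg["command"].split()               # e.g. neutron-ovn-metadata-agent
        os.execvp(argv[0], argv)                    # the final `exec` line above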
Dec  1 05:01:02 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:01:02 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:01:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:02.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:02 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:02 np0005540825 systemd[1]: Started ovn_metadata_agent container.
Dec  1 05:01:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 0 B/s wr, 142 op/s
Dec  1 05:01:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:03 np0005540825 systemd[1]: session-52.scope: Deactivated successfully.
Dec  1 05:01:03 np0005540825 systemd[1]: session-52.scope: Consumed 1min 1.376s CPU time.
Dec  1 05:01:03 np0005540825 systemd-logind[789]: Session 52 logged out. Waiting for processes to exit.
Dec  1 05:01:03 np0005540825 systemd-logind[789]: Removed session 52.
Dec  1 05:01:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:03.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:03 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11c40053e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.517 163291 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.518 163291 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.518 163291 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.518 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.518 163291 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.518 163291 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.519 163291 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.519 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.519 163291 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.519 163291 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.519 163291 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.519 163291 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.519 163291 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.519 163291 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.519 163291 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.520 163291 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.520 163291 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.520 163291 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.520 163291 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.520 163291 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.520 163291 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.520 163291 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.520 163291 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.520 163291 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.520 163291 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.521 163291 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.521 163291 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.521 163291 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.521 163291 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.521 163291 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.521 163291 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.521 163291 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.521 163291 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.521 163291 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.522 163291 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.522 163291 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.522 163291 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.522 163291 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.522 163291 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.522 163291 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.522 163291 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.522 163291 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.522 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.523 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.523 163291 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.523 163291 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.523 163291 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.523 163291 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.523 163291 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.523 163291 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.523 163291 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.523 163291 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.523 163291 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.523 163291 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.524 163291 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.524 163291 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.524 163291 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.524 163291 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.524 163291 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.524 163291 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.524 163291 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.524 163291 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.524 163291 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.525 163291 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.525 163291 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.525 163291 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.525 163291 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.525 163291 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.525 163291 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.525 163291 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.525 163291 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.525 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.526 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.526 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.526 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.526 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.526 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.526 163291 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.526 163291 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.526 163291 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.526 163291 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.526 163291 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.527 163291 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.527 163291 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.527 163291 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.527 163291 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.527 163291 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.527 163291 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.527 163291 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.527 163291 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.527 163291 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.527 163291 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.528 163291 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.528 163291 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.528 163291 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.528 163291 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.528 163291 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.528 163291 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.528 163291 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.528 163291 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.528 163291 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.528 163291 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.529 163291 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.529 163291 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.529 163291 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.529 163291 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.529 163291 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.529 163291 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.529 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.529 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.529 163291 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.530 163291 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.530 163291 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.530 163291 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.530 163291 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.530 163291 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.530 163291 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.530 163291 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.530 163291 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.530 163291 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.531 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.531 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.531 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.531 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.531 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.531 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.531 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.531 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.531 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.532 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.532 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.532 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.532 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.532 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.532 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.532 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.532 163291 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.532 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.533 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.533 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.533 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.533 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.533 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.533 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.533 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.533 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.533 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.533 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.534 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.534 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.534 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.534 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.534 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.534 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.534 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.534 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.534 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.534 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.535 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.535 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.535 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.535 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.535 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.535 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.535 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.535 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.535 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.536 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.536 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.536 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.536 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.536 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.536 163291 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.536 163291 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.536 163291 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.536 163291 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.537 163291 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.537 163291 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.537 163291 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.537 163291 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.537 163291 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.537 163291 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.537 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.537 163291 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.537 163291 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.538 163291 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.538 163291 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.538 163291 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.538 163291 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.538 163291 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.538 163291 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.538 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.538 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.538 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.538 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.539 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.539 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.539 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.539 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.539 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.539 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.539 163291 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.539 163291 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.539 163291 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.539 163291 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.540 163291 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.540 163291 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.540 163291 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.540 163291 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.540 163291 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.540 163291 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.540 163291 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.540 163291 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.540 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.541 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.541 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.541 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.541 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.541 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.541 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.541 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.541 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.541 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.541 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.542 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.542 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.542 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.542 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.542 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.542 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.542 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.542 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.542 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.542 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.543 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.543 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.543 163291 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.543 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.543 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.543 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.543 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.543 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.543 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.544 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.544 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.544 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.544 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.544 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.544 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.544 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.544 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.544 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.545 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.545 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.545 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.545 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.545 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.545 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.545 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.545 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.545 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.546 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.546 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.546 163291 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.546 163291 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.546 163291 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.546 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.546 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.546 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.546 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.547 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.547 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.547 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.547 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.547 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.547 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.547 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.548 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.548 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.548 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.548 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.548 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.548 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.548 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.548 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.548 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.549 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.549 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.549 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.549 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.549 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.549 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.549 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.549 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.549 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.550 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.550 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.550 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.550 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.550 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.550 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.550 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.550 163291 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.550 163291 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.559 163291 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.559 163291 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.559 163291 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.559 163291 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.559 163291 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.572 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 4d9738cf-2abf-48e2-9303-677669784912 (UUID: 4d9738cf-2abf-48e2-9303-677669784912) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.594 163291 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.595 163291 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.595 163291 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.595 163291 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.598 163291 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.603 163291 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.609 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '4d9738cf-2abf-48e2-9303-677669784912'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], external_ids={}, name=4d9738cf-2abf-48e2-9303-677669784912, nb_cfg_timestamp=1764583192186, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.610 163291 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f3429b43bb0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.610 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.611 163291 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.611 163291 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.611 163291 INFO oslo_service.service [-] Starting 1 workers
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.615 163291 DEBUG oslo_service.service [-] Started child 163402 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.619 163291 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpwguvmfuf/privsep.sock']
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.620 163402 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-454291'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.657 163402 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.658 163402 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.658 163402 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.663 163402 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.672 163402 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  1 05:01:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:04.682 163402 INFO eventlet.wsgi.server [-] (163402) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec  1 05:01:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:01:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:04.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:01:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:04 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:05 np0005540825 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  1 05:01:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:05.327 163291 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  1 05:01:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:05.328 163291 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwguvmfuf/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  1 05:01:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:05.205 163408 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  1 05:01:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:05.214 163408 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  1 05:01:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:05.216 163408 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec  1 05:01:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:05.216 163408 INFO oslo.privsep.daemon [-] privsep daemon running as pid 163408
Dec  1 05:01:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:05.331 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2ef633-4dc1-47a2-a23a-741c6261cc63]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 05:01:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 0 B/s wr, 142 op/s
Dec  1 05:01:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:05.861 163408 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:01:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:05.861 163408 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:01:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:05.861 163408 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:01:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:05.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:05 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.390 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[68c6f497-d15f-4886-a101-8eccf19f052a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.393 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, column=external_ids, values=({'neutron:ovn-metadata-id': 'b3baad50-d90f-57f3-a7cc-d831c13b9477'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.405 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.413 163291 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.413 163291 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.413 163291 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.414 163291 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.414 163291 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.414 163291 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.414 163291 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.415 163291 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.415 163291 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.415 163291 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.416 163291 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.416 163291 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.416 163291 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.417 163291 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.417 163291 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.418 163291 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.418 163291 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.418 163291 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.419 163291 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.419 163291 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.419 163291 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.419 163291 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.419 163291 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.419 163291 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.420 163291 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.420 163291 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.420 163291 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.420 163291 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.421 163291 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.421 163291 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.421 163291 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.421 163291 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.421 163291 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.421 163291 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.421 163291 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.422 163291 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.422 163291 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.422 163291 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.422 163291 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.422 163291 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.423 163291 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.423 163291 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.423 163291 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.423 163291 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.423 163291 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.423 163291 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.423 163291 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.424 163291 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.424 163291 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.424 163291 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.424 163291 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.424 163291 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.424 163291 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.424 163291 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.425 163291 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.425 163291 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.425 163291 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.425 163291 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.425 163291 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.425 163291 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.425 163291 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.426 163291 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.426 163291 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.426 163291 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.426 163291 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.426 163291 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.426 163291 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.427 163291 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.427 163291 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.427 163291 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.427 163291 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.427 163291 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.427 163291 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.427 163291 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.427 163291 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.428 163291 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.428 163291 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.428 163291 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.428 163291 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.428 163291 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.428 163291 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.429 163291 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.429 163291 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.429 163291 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.429 163291 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.429 163291 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.429 163291 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.429 163291 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.430 163291 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.430 163291 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.430 163291 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.430 163291 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.430 163291 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.430 163291 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.430 163291 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.430 163291 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.431 163291 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.431 163291 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.431 163291 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.431 163291 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.431 163291 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.431 163291 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.432 163291 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.432 163291 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.432 163291 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.432 163291 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.433 163291 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.433 163291 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.433 163291 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.434 163291 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.434 163291 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.434 163291 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.435 163291 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.435 163291 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.435 163291 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.435 163291 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.436 163291 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.436 163291 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.436 163291 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.436 163291 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.437 163291 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.437 163291 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.437 163291 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.438 163291 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.438 163291 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.438 163291 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.438 163291 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.439 163291 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.439 163291 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.439 163291 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.439 163291 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.440 163291 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.440 163291 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.440 163291 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.440 163291 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.441 163291 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.441 163291 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.441 163291 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.442 163291 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.442 163291 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.442 163291 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.442 163291 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.443 163291 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.443 163291 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.443 163291 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.443 163291 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.444 163291 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.444 163291 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.444 163291 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.444 163291 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.445 163291 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.445 163291 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.445 163291 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.445 163291 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.445 163291 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.445 163291 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.446 163291 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.446 163291 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.446 163291 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.446 163291 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.446 163291 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.446 163291 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.447 163291 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.447 163291 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.447 163291 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.447 163291 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.447 163291 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.447 163291 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.448 163291 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.448 163291 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.448 163291 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.448 163291 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.448 163291 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.448 163291 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.449 163291 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.449 163291 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.449 163291 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.449 163291 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.449 163291 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.449 163291 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.449 163291 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.450 163291 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.450 163291 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.450 163291 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.450 163291 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.450 163291 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.450 163291 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.451 163291 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.451 163291 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.451 163291 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.451 163291 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.451 163291 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.451 163291 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.451 163291 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.452 163291 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.452 163291 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.452 163291 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.452 163291 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.452 163291 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.452 163291 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.452 163291 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.453 163291 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.453 163291 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.453 163291 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.453 163291 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.453 163291 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.453 163291 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.453 163291 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.454 163291 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.454 163291 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.454 163291 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.454 163291 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.454 163291 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.454 163291 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.454 163291 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.455 163291 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.455 163291 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.455 163291 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.455 163291 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.455 163291 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.455 163291 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.455 163291 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.455 163291 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.456 163291 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.456 163291 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.456 163291 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.456 163291 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.456 163291 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.456 163291 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.456 163291 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.457 163291 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.457 163291 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.457 163291 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.457 163291 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.457 163291 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.457 163291 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.457 163291 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.458 163291 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.458 163291 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.458 163291 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.458 163291 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.458 163291 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.458 163291 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.458 163291 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.459 163291 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.459 163291 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.459 163291 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.459 163291 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.459 163291 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.459 163291 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.459 163291 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.460 163291 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.460 163291 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.460 163291 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.460 163291 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.460 163291 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.460 163291 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.460 163291 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.461 163291 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.461 163291 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.461 163291 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.461 163291 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.461 163291 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.461 163291 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.461 163291 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.462 163291 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.462 163291 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.462 163291 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.462 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.462 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.462 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.462 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.463 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.463 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.463 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.463 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.463 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.463 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.463 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.464 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.464 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.464 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.464 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.464 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.464 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.464 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.465 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.465 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.465 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.465 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.465 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.465 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.465 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.465 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.466 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.466 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.466 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.466 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.466 163291 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.466 163291 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.467 163291 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.467 163291 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.467 163291 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:01:06 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:01:06.467 163291 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  1 05:01:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:01:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:06.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:01:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:06 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:07.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:01:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 0 B/s wr, 142 op/s
Dec  1 05:01:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f119c00b850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:07.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:07 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:01:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:08.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:01:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:08 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a8002fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:08.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:01:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:01:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 0 B/s wr, 142 op/s
Dec  1 05:01:09 np0005540825 systemd-logind[789]: New session 53 of user zuul.
Dec  1 05:01:09 np0005540825 systemd[1]: Started Session 53 of User zuul.
Dec  1 05:01:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:01:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:01:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:09 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:01:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:01:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:01:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:01:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:01:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:01:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:01:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:09.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:01:10 np0005540825 kernel: ganesha.nfsd[162381]: segfault at 50 ip 00007f127480532e sp 00007f123b7fd210 error 4 in libntirpc.so.5.8[7f12747ea000+2c000] likely on CPU 5 (core 0, socket 5)
Dec  1 05:01:10 np0005540825 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  1 05:01:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[120982]: 01/12/2025 10:01:10 : epoch 692d661d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11a0003430 fd 48 proxy ignored for local
Dec  1 05:01:10 np0005540825 systemd[1]: Started Process Core Dump (PID 163521/UID 0).
Dec  1 05:01:10 np0005540825 python3.9[163573]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 05:01:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:10.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:11] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Dec  1 05:01:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:11] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Dec  1 05:01:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 0 B/s wr, 142 op/s
Dec  1 05:01:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:01:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:01:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:01:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:01:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 290 B/s rd, 0 op/s
Dec  1 05:01:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:01:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:11.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:12 np0005540825 python3.9[163800]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:01:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:12.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:13 np0005540825 systemd-coredump[163522]: Process 120987 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 74:
                                                       #0  0x00007f127480532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Dec  1 05:01:13 np0005540825 systemd[1]: systemd-coredump@1-163521-0.service: Deactivated successfully.
Dec  1 05:01:13 np0005540825 systemd[1]: systemd-coredump@1-163521-0.service: Consumed 1.368s CPU time.
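systemd-coredump has recorded the dump for PID 120987 (the entry above), so the core should be retrievable from the journal for offline analysis. A sketch, assuming coredumpctl is installed on the host and the dump was retained; the output filename is ours:

    # Pull metadata and the core file for the crashed ganesha.nfsd.
    import subprocess

    pid = "120987"  # from the systemd-coredump entry above
    subprocess.run(["coredumpctl", "info", pid], check=True)
    subprocess.run(["coredumpctl", "dump", pid, "-o", "ganesha.core"], check=True)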
Dec  1 05:01:13 np0005540825 podman[163964]: 2025-12-01 10:01:13.344695109 +0000 UTC m=+0.049218403 container died 33ed98ad02f00f5f0d532f872f221422a74604fcda0145c21446c63d6c695acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:01:13 np0005540825 systemd[1]: var-lib-containers-storage-overlay-94cfaf6288f6c4899cc4e5b6e424dd2321ab0aeb7e2b1768c4d87ad70acba807-merged.mount: Deactivated successfully.
Dec  1 05:01:13 np0005540825 podman[163964]: 2025-12-01 10:01:13.40099357 +0000 UTC m=+0.105516824 container remove 33ed98ad02f00f5f0d532f872f221422a74604fcda0145c21446c63d6c695acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 05:01:13 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Main process exited, code=exited, status=139/n/a
Dec  1 05:01:13 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:01:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:01:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:01:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:01:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:01:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:01:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:01:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:01:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:01:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:01:13 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Failed with result 'exit-code'.
Dec  1 05:01:13 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 2.387s CPU time.
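The status=139 in the unit failure above is the usual 128+N encoding a container runtime reports when its payload dies on signal N, so it should decode to the SIGSEGV the kernel logged a few seconds earlier. A one-line check:

    import signal

    status = 139
    print(signal.Signals(status - 128).name)  # SIGSEGV (11 = 139 - 128)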
Dec  1 05:01:13 np0005540825 python3.9[163995]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 05:01:13 np0005540825 systemd[1]: Reloading.
Dec  1 05:01:13 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:01:13 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:01:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 290 B/s rd, 0 op/s
Dec  1 05:01:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:14.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:01:14 np0005540825 podman[164231]: 2025-12-01 10:01:14.434169487 +0000 UTC m=+0.045262772 container create 92b121085c026055df05badad9ff5ec8cb1e1d3cca23d95417630cf19d254ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  1 05:01:14 np0005540825 systemd[1]: Started libpod-conmon-92b121085c026055df05badad9ff5ec8cb1e1d3cca23d95417630cf19d254ece.scope.
Dec  1 05:01:14 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:01:14 np0005540825 podman[164231]: 2025-12-01 10:01:14.411240723 +0000 UTC m=+0.022334028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:01:14 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:01:14 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:01:14 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:01:14 np0005540825 podman[164231]: 2025-12-01 10:01:14.526616563 +0000 UTC m=+0.137709938 container init 92b121085c026055df05badad9ff5ec8cb1e1d3cca23d95417630cf19d254ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_williams, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:01:14 np0005540825 podman[164231]: 2025-12-01 10:01:14.538836147 +0000 UTC m=+0.149929422 container start 92b121085c026055df05badad9ff5ec8cb1e1d3cca23d95417630cf19d254ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_williams, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:01:14 np0005540825 podman[164231]: 2025-12-01 10:01:14.542709485 +0000 UTC m=+0.153802871 container attach 92b121085c026055df05badad9ff5ec8cb1e1d3cca23d95417630cf19d254ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:01:14 np0005540825 objective_williams[164247]: 167 167
Dec  1 05:01:14 np0005540825 systemd[1]: libpod-92b121085c026055df05badad9ff5ec8cb1e1d3cca23d95417630cf19d254ece.scope: Deactivated successfully.
Dec  1 05:01:14 np0005540825 podman[164231]: 2025-12-01 10:01:14.551571684 +0000 UTC m=+0.162665089 container died 92b121085c026055df05badad9ff5ec8cb1e1d3cca23d95417630cf19d254ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_williams, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Dec  1 05:01:14 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2d78659187ff78bf1d046a7c06bcb4c689f61307ba817d7c0ac7f47c66477fad-merged.mount: Deactivated successfully.
Dec  1 05:01:14 np0005540825 podman[164231]: 2025-12-01 10:01:14.606131627 +0000 UTC m=+0.217224952 container remove 92b121085c026055df05badad9ff5ec8cb1e1d3cca23d95417630cf19d254ece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:01:14 np0005540825 systemd[1]: libpod-conmon-92b121085c026055df05badad9ff5ec8cb1e1d3cca23d95417630cf19d254ece.scope: Deactivated successfully.
Dec  1 05:01:14 np0005540825 podman[164345]: 2025-12-01 10:01:14.838957506 +0000 UTC m=+0.060857510 container create 9b11248a1bc68a0bd2651b1bc189faa17094112fe99e18ffbfbaa172a691d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  1 05:01:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:14.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:14 np0005540825 systemd[1]: Started libpod-conmon-9b11248a1bc68a0bd2651b1bc189faa17094112fe99e18ffbfbaa172a691d13f.scope.
Dec  1 05:01:14 np0005540825 podman[164345]: 2025-12-01 10:01:14.814091117 +0000 UTC m=+0.035991231 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:01:14 np0005540825 python3.9[164339]: ansible-ansible.builtin.service_facts Invoked
Dec  1 05:01:14 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:01:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d9ba6695f263c28077331b94820d1d7c1998641a95ab5a1edf0096c7413ca9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d9ba6695f263c28077331b94820d1d7c1998641a95ab5a1edf0096c7413ca9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d9ba6695f263c28077331b94820d1d7c1998641a95ab5a1edf0096c7413ca9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d9ba6695f263c28077331b94820d1d7c1998641a95ab5a1edf0096c7413ca9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d9ba6695f263c28077331b94820d1d7c1998641a95ab5a1edf0096c7413ca9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:14 np0005540825 podman[164345]: 2025-12-01 10:01:14.94202295 +0000 UTC m=+0.163923044 container init 9b11248a1bc68a0bd2651b1bc189faa17094112fe99e18ffbfbaa172a691d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:01:14 np0005540825 podman[164345]: 2025-12-01 10:01:14.954670706 +0000 UTC m=+0.176570710 container start 9b11248a1bc68a0bd2651b1bc189faa17094112fe99e18ffbfbaa172a691d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  1 05:01:14 np0005540825 podman[164345]: 2025-12-01 10:01:14.959154912 +0000 UTC m=+0.181054946 container attach 9b11248a1bc68a0bd2651b1bc189faa17094112fe99e18ffbfbaa172a691d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_ramanujan, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:01:14 np0005540825 network[164383]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 05:01:14 np0005540825 network[164384]: 'network-scripts' will be removed from distribution in near future.
Dec  1 05:01:14 np0005540825 network[164385]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 05:01:15 np0005540825 happy_ramanujan[164362]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:01:15 np0005540825 happy_ramanujan[164362]: --> All data devices are unavailable
Dec  1 05:01:15 np0005540825 podman[164345]: 2025-12-01 10:01:15.332035614 +0000 UTC m=+0.553935608 container died 9b11248a1bc68a0bd2651b1bc189faa17094112fe99e18ffbfbaa172a691d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_ramanujan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  1 05:01:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100115 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
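haproxy state-change lines like the one above have a fixed shape, so the affected server and the backend's remaining capacity can be scraped with a single regex. A sketch against an abridged copy of the line (the pattern is ours, not part of haproxy):

    import re

    line = ('Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, '
            'info: "Connection refused", check duration: 0ms. 2 active and 0 backup '
            'servers left. 0 sessions active, 0 requeued, 0 remaining in queue.')

    m = re.search(r'Server (\S+) is (UP|DOWN).*?(\d+) active and (\d+) backup', line)
    if m:
        server, state, active, backup = m.groups()
        print(server, state, active, backup)  # backend/nfs.cephfs.1 DOWN 2 0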
Dec  1 05:01:15 np0005540825 systemd[1]: libpod-9b11248a1bc68a0bd2651b1bc189faa17094112fe99e18ffbfbaa172a691d13f.scope: Deactivated successfully.
Dec  1 05:01:15 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f4d9ba6695f263c28077331b94820d1d7c1998641a95ab5a1edf0096c7413ca9-merged.mount: Deactivated successfully.
Dec  1 05:01:15 np0005540825 podman[164345]: 2025-12-01 10:01:15.727134311 +0000 UTC m=+0.949034305 container remove 9b11248a1bc68a0bd2651b1bc189faa17094112fe99e18ffbfbaa172a691d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:01:15 np0005540825 systemd[1]: libpod-conmon-9b11248a1bc68a0bd2651b1bc189faa17094112fe99e18ffbfbaa172a691d13f.scope: Deactivated successfully.
Dec  1 05:01:15 np0005540825 podman[164418]: 2025-12-01 10:01:15.858427079 +0000 UTC m=+0.126747981 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
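The config_data field embedded in the ovn_controller health check entry above is a Python-literal dict (single quotes, bare True), so ast.literal_eval can read it without a custom parser. A sketch against an abridged copy of the logged value:

    import ast

    config_data = ("{'depends_on': ['openvswitch.service'], "
                   "'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
                   "'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', "
                   "'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root'}")

    cfg = ast.literal_eval(config_data)
    print(cfg["image"], cfg["privileged"], cfg["restart"])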
Dec  1 05:01:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 386 B/s rd, 0 op/s
Dec  1 05:01:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:16.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:16 np0005540825 podman[164561]: 2025-12-01 10:01:16.382046645 +0000 UTC m=+0.034173651 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:01:16 np0005540825 podman[164561]: 2025-12-01 10:01:16.573026639 +0000 UTC m=+0.225153595 container create afa6701549f42630baa0a536d716a7605dac4561d2c55521e24044b1b2a10fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_stonebraker, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:01:16 np0005540825 systemd[1]: Started libpod-conmon-afa6701549f42630baa0a536d716a7605dac4561d2c55521e24044b1b2a10fb5.scope.
Dec  1 05:01:16 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:01:16 np0005540825 podman[164561]: 2025-12-01 10:01:16.682661458 +0000 UTC m=+0.334788464 container init afa6701549f42630baa0a536d716a7605dac4561d2c55521e24044b1b2a10fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_stonebraker, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:01:16 np0005540825 podman[164561]: 2025-12-01 10:01:16.693113622 +0000 UTC m=+0.345240588 container start afa6701549f42630baa0a536d716a7605dac4561d2c55521e24044b1b2a10fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_stonebraker, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 05:01:16 np0005540825 podman[164561]: 2025-12-01 10:01:16.697574297 +0000 UTC m=+0.349701263 container attach afa6701549f42630baa0a536d716a7605dac4561d2c55521e24044b1b2a10fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:01:16 np0005540825 kind_stonebraker[164589]: 167 167
Dec  1 05:01:16 np0005540825 systemd[1]: libpod-afa6701549f42630baa0a536d716a7605dac4561d2c55521e24044b1b2a10fb5.scope: Deactivated successfully.
Dec  1 05:01:16 np0005540825 podman[164561]: 2025-12-01 10:01:16.704604934 +0000 UTC m=+0.356731900 container died afa6701549f42630baa0a536d716a7605dac4561d2c55521e24044b1b2a10fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  1 05:01:16 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e37b291f1c7999bccd6d797148d0ad8efc4ec06e668a1b5f194dd55c1d88c2c5-merged.mount: Deactivated successfully.
Dec  1 05:01:16 np0005540825 podman[164561]: 2025-12-01 10:01:16.748686502 +0000 UTC m=+0.400813428 container remove afa6701549f42630baa0a536d716a7605dac4561d2c55521e24044b1b2a10fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_stonebraker, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:01:16 np0005540825 systemd[1]: libpod-conmon-afa6701549f42630baa0a536d716a7605dac4561d2c55521e24044b1b2a10fb5.scope: Deactivated successfully.
Dec  1 05:01:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:16.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:17 np0005540825 podman[164623]: 2025-12-01 10:01:17.002730647 +0000 UTC m=+0.059021288 container create 4b887d8f627567fdd94a706d567f50a0e0738afece75188adea199dc3d4e018f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_driscoll, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:01:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:17.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:01:17 np0005540825 systemd[1]: Started libpod-conmon-4b887d8f627567fdd94a706d567f50a0e0738afece75188adea199dc3d4e018f.scope.
Dec  1 05:01:17 np0005540825 podman[164623]: 2025-12-01 10:01:16.969453913 +0000 UTC m=+0.025744604 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:01:17 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:01:17 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ec654a191af8bc30d00f421fb33071ee7a684425db045019a77016f58dca02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:17 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ec654a191af8bc30d00f421fb33071ee7a684425db045019a77016f58dca02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:17 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ec654a191af8bc30d00f421fb33071ee7a684425db045019a77016f58dca02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:17 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4ec654a191af8bc30d00f421fb33071ee7a684425db045019a77016f58dca02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:17 np0005540825 podman[164623]: 2025-12-01 10:01:17.134866349 +0000 UTC m=+0.191157030 container init 4b887d8f627567fdd94a706d567f50a0e0738afece75188adea199dc3d4e018f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:01:17 np0005540825 podman[164623]: 2025-12-01 10:01:17.146203727 +0000 UTC m=+0.202494378 container start 4b887d8f627567fdd94a706d567f50a0e0738afece75188adea199dc3d4e018f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:01:17 np0005540825 podman[164623]: 2025-12-01 10:01:17.150376894 +0000 UTC m=+0.206667595 container attach 4b887d8f627567fdd94a706d567f50a0e0738afece75188adea199dc3d4e018f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]: {
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:    "1": [
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:        {
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            "devices": [
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "/dev/loop3"
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            ],
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            "lv_name": "ceph_lv0",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            "lv_size": "21470642176",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            "name": "ceph_lv0",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            "tags": {
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.cluster_name": "ceph",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.crush_device_class": "",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.encrypted": "0",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.osd_id": "1",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.type": "block",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.vdo": "0",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:                "ceph.with_tpm": "0"
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            },
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            "type": "block",
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:            "vg_name": "ceph_vg0"
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:        }
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]:    ]
Dec  1 05:01:17 np0005540825 quizzical_driscoll[164643]: }
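The JSON printed by quizzical_driscoll above is shaped like ceph-volume lvm list --format json output (an assumption; cephadm runs ceph-volume in short-lived containers like this one). A sketch that walks an abridged copy of the blob and recovers the OSD-to-device mapping:

    import json

    raw_json = '''
    {"1": [{"devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "tags": {"ceph.osd_id": "1",
                     "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047"}}]}
    '''

    for osd_id, lvs in json.loads(raw_json).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"], lv["tags"]["ceph.osd_fsid"])
    # -> 1 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'] 0faa9895-0b70-4c34-8548-ef8fc62fc047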
Dec  1 05:01:17 np0005540825 systemd[1]: libpod-4b887d8f627567fdd94a706d567f50a0e0738afece75188adea199dc3d4e018f.scope: Deactivated successfully.
Dec  1 05:01:17 np0005540825 podman[164623]: 2025-12-01 10:01:17.51055873 +0000 UTC m=+0.566849371 container died 4b887d8f627567fdd94a706d567f50a0e0738afece75188adea199dc3d4e018f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 05:01:17 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f4ec654a191af8bc30d00f421fb33071ee7a684425db045019a77016f58dca02-merged.mount: Deactivated successfully.
Dec  1 05:01:17 np0005540825 podman[164623]: 2025-12-01 10:01:17.576451731 +0000 UTC m=+0.632742362 container remove 4b887d8f627567fdd94a706d567f50a0e0738afece75188adea199dc3d4e018f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_driscoll, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 05:01:17 np0005540825 systemd[1]: libpod-conmon-4b887d8f627567fdd94a706d567f50a0e0738afece75188adea199dc3d4e018f.scope: Deactivated successfully.
Dec  1 05:01:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 B/s rd, 0 op/s
Dec  1 05:01:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100118 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:01:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:18.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:18 np0005540825 podman[164803]: 2025-12-01 10:01:18.282276064 +0000 UTC m=+0.058084473 container create 61752dde7201ec5722824e6be864be7961637d9ac21112da1e4bb874a85732b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  1 05:01:18 np0005540825 systemd[1]: Started libpod-conmon-61752dde7201ec5722824e6be864be7961637d9ac21112da1e4bb874a85732b4.scope.
Dec  1 05:01:18 np0005540825 podman[164803]: 2025-12-01 10:01:18.250768169 +0000 UTC m=+0.026576628 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:01:18 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:01:18 np0005540825 podman[164803]: 2025-12-01 10:01:18.389379772 +0000 UTC m=+0.165188231 container init 61752dde7201ec5722824e6be864be7961637d9ac21112da1e4bb874a85732b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  1 05:01:18 np0005540825 podman[164803]: 2025-12-01 10:01:18.403539409 +0000 UTC m=+0.179347788 container start 61752dde7201ec5722824e6be864be7961637d9ac21112da1e4bb874a85732b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 05:01:18 np0005540825 podman[164803]: 2025-12-01 10:01:18.409370483 +0000 UTC m=+0.185178892 container attach 61752dde7201ec5722824e6be864be7961637d9ac21112da1e4bb874a85732b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carver, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  1 05:01:18 np0005540825 frosty_carver[164823]: 167 167
Dec  1 05:01:18 np0005540825 systemd[1]: libpod-61752dde7201ec5722824e6be864be7961637d9ac21112da1e4bb874a85732b4.scope: Deactivated successfully.
Dec  1 05:01:18 np0005540825 podman[164803]: 2025-12-01 10:01:18.413032406 +0000 UTC m=+0.188840785 container died 61752dde7201ec5722824e6be864be7961637d9ac21112da1e4bb874a85732b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 05:01:18 np0005540825 systemd[1]: var-lib-containers-storage-overlay-3cb491be2bccd2cd3fd7f17612ebd50cfaeb3fbf1672be7d71dd92eeb5075cde-merged.mount: Deactivated successfully.
Dec  1 05:01:18 np0005540825 podman[164803]: 2025-12-01 10:01:18.467182917 +0000 UTC m=+0.242991306 container remove 61752dde7201ec5722824e6be864be7961637d9ac21112da1e4bb874a85732b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carver, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:01:18 np0005540825 systemd[1]: libpod-conmon-61752dde7201ec5722824e6be864be7961637d9ac21112da1e4bb874a85732b4.scope: Deactivated successfully.
Dec  1 05:01:18 np0005540825 podman[164898]: 2025-12-01 10:01:18.688128602 +0000 UTC m=+0.073096634 container create 706e8680ba9e64906bf8602e2183ba291ddaf4c26c191c4691f577dcc3d4b773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:01:18 np0005540825 systemd[1]: Started libpod-conmon-706e8680ba9e64906bf8602e2183ba291ddaf4c26c191c4691f577dcc3d4b773.scope.
Dec  1 05:01:18 np0005540825 podman[164898]: 2025-12-01 10:01:18.65848554 +0000 UTC m=+0.043453682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:01:18 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:01:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d8657de389cb36ef080cc8c3c5fc1db40bca8c31881c0cc095676f72cbeada/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d8657de389cb36ef080cc8c3c5fc1db40bca8c31881c0cc095676f72cbeada/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d8657de389cb36ef080cc8c3c5fc1db40bca8c31881c0cc095676f72cbeada/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d8657de389cb36ef080cc8c3c5fc1db40bca8c31881c0cc095676f72cbeada/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:18 np0005540825 podman[164898]: 2025-12-01 10:01:18.794908101 +0000 UTC m=+0.179876153 container init 706e8680ba9e64906bf8602e2183ba291ddaf4c26c191c4691f577dcc3d4b773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:01:18 np0005540825 podman[164898]: 2025-12-01 10:01:18.810626033 +0000 UTC m=+0.195594095 container start 706e8680ba9e64906bf8602e2183ba291ddaf4c26c191c4691f577dcc3d4b773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:01:18 np0005540825 podman[164898]: 2025-12-01 10:01:18.814400709 +0000 UTC m=+0.199368821 container attach 706e8680ba9e64906bf8602e2183ba291ddaf4c26c191c4691f577dcc3d4b773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:01:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:18.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
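The three radosgw lines above are one request: beast logs a frontend begin/end pair plus an access line in a fixed layout (request pointer, client IP, user, timestamp, request line, status, bytes, latency). The anonymous "HEAD /" probes arriving roughly every two seconds from 192.168.122.100 and .102 look like load-balancer health checks. A minimal parsing sketch for the access line, with field names of my own choosing:

    import re

    # Field layout taken from the beast access lines above; the names are mine.
    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<nbytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous '
            '[01/Dec/2025:10:01:18.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000028s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group('client'), m.group('status'), float(m.group('latency')))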
Dec  1 05:01:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:18.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:01:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:18.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
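Alertmanager's ceph-dashboard receiver cannot reach the webhook URLs on compute-1/compute-2 port 8443: the first attempt times out at the TCP dial and the retries are then canceled by the context deadline, so every notification in this group is dropped. For connectivity testing, a minimal stand-in receiver on the logged path is easy to sketch (assuming plain HTTP; the real dashboard endpoint may sit behind TLS and authentication):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Drain and acknowledge the Alertmanager JSON payload.
            length = int(self.headers.get('Content-Length', 0))
            body = self.rfile.read(length)
            print('received', len(body), 'bytes on', self.path)
            self.send_response(200)
            self.end_headers()

    # Port and path copied from the failing URL above.
    HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()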
Dec  1 05:01:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:01:19 np0005540825 lvm[164991]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:01:19 np0005540825 lvm[164991]: VG ceph_vg0 finished
Dec  1 05:01:19 np0005540825 adoring_fermi[164914]: {}
Dec  1 05:01:19 np0005540825 systemd[1]: libpod-706e8680ba9e64906bf8602e2183ba291ddaf4c26c191c4691f577dcc3d4b773.scope: Deactivated successfully.
Dec  1 05:01:19 np0005540825 systemd[1]: libpod-706e8680ba9e64906bf8602e2183ba291ddaf4c26c191c4691f577dcc3d4b773.scope: Consumed 1.621s CPU time.
Dec  1 05:01:19 np0005540825 podman[164898]: 2025-12-01 10:01:19.730800787 +0000 UTC m=+1.115768809 container died 706e8680ba9e64906bf8602e2183ba291ddaf4c26c191c4691f577dcc3d4b773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  1 05:01:19 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a3d8657de389cb36ef080cc8c3c5fc1db40bca8c31881c0cc095676f72cbeada-merged.mount: Deactivated successfully.
Dec  1 05:01:19 np0005540825 podman[164898]: 2025-12-01 10:01:19.774513204 +0000 UTC m=+1.159481226 container remove 706e8680ba9e64906bf8602e2183ba291ddaf4c26c191c4691f577dcc3d4b773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:01:19 np0005540825 systemd[1]: libpod-conmon-706e8680ba9e64906bf8602e2183ba291ddaf4c26c191c4691f577dcc3d4b773.scope: Deactivated successfully.
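Container 706e8680ba9e (adoring_fermi) lives just over a second: create at 10:01:18.688, start at .810, died at 10:01:19.730, remove at .774, with the single-line output '{}' in between. Together with the random podman name this looks like one of cephadm's short-lived probe containers. A sketch that pairs create/remove events from podman payloads like the ones above to measure such lifetimes:

    from datetime import datetime

    def parse_event(line):
        """Return (timestamp, action, container id) from a podman event payload."""
        fields = line.split()
        # '2025-12-01 10:01:18.688128602' -> trim to microseconds for %f
        ts = datetime.strptime(' '.join(fields[:2])[:26], '%Y-%m-%d %H:%M:%S.%f')
        return ts, fields[6], fields[7]

    events = [
        '2025-12-01 10:01:18.688128602 +0000 UTC m=+0.073096634 container create 706e8680ba9e',
        '2025-12-01 10:01:19.774513204 +0000 UTC m=+1.159481226 container remove 706e8680ba9e',
    ]

    created = {}
    for ev in events:
        ts, action, cid = parse_event(ev)
        if action == 'create':
            created[cid] = ts
        elif action == 'remove' and cid in created:
            print(cid[:12], 'lived %.3fs' % (ts - created[cid]).total_seconds())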
Dec  1 05:01:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:01:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:01:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:01:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 B/s rd, 0 op/s
Dec  1 05:01:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:01:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:20.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:20 np0005540825 python3.9[165158]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:01:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:20.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:20 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:01:20 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:01:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:21] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Dec  1 05:01:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:21] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Dec  1 05:01:21 np0005540825 python3.9[165312]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:01:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 483 B/s rd, 96 B/s wr, 0 op/s
Dec  1 05:01:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:22.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:22 np0005540825 python3.9[165466]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:01:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:22.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:23 np0005540825 python3.9[165619]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:01:23 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Scheduled restart job, restart counter is at 2.
Dec  1 05:01:23 np0005540825 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:01:23 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 2.387s CPU time.
Dec  1 05:01:23 np0005540825 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 05:01:23 np0005540825 podman[165820]: 2025-12-01 10:01:23.897958964 +0000 UTC m=+0.038993606 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:01:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:01:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:24.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:01:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
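The blocklist query the mgr dispatches here is an ordinary JSON mon command, and the same call can be reproduced from python-rados; a sketch, assuming a readable /etc/ceph/ceph.conf and client keyring on the host:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    # Same JSON the mgr sends in the audit line above.
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, json.loads(outbuf or b'[]'))
    cluster.shutdown()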
Dec  1 05:01:24 np0005540825 podman[165820]: 2025-12-01 10:01:24.572605532 +0000 UTC m=+0.713640114 container create 712a4c1e4a3d4112359d36679955d704512aa624ff7c4e557acb04aadf264297 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 05:01:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:01:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15a33afbd212731f526e149f8e80099a20ea62e5282ee95fe97e02819597547/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15a33afbd212731f526e149f8e80099a20ea62e5282ee95fe97e02819597547/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15a33afbd212731f526e149f8e80099a20ea62e5282ee95fe97e02819597547/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15a33afbd212731f526e149f8e80099a20ea62e5282ee95fe97e02819597547/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.pytvsu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:01:24 np0005540825 podman[165820]: 2025-12-01 10:01:24.652522336 +0000 UTC m=+0.793556978 container init 712a4c1e4a3d4112359d36679955d704512aa624ff7c4e557acb04aadf264297 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  1 05:01:24 np0005540825 podman[165820]: 2025-12-01 10:01:24.658470233 +0000 UTC m=+0.799504815 container start 712a4c1e4a3d4112359d36679955d704512aa624ff7c4e557acb04aadf264297 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:01:24 np0005540825 bash[165820]: 712a4c1e4a3d4112359d36679955d704512aa624ff7c4e557acb04aadf264297
Dec  1 05:01:24 np0005540825 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:01:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  1 05:01:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  1 05:01:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  1 05:01:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  1 05:01:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  1 05:01:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  1 05:01:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  1 05:01:24 np0005540825 python3.9[165807]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:01:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:24.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:25 np0005540825 python3.9[166033]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:01:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:25 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:01:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec  1 05:01:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:26.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:26 np0005540825 python3.9[166186]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
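The python3.9 lines from 10:01:20 onward are one Ansible loop: ansible.builtin.systemd_service with enabled=False, state=stopped for tripleo_nova_libvirt.target and each tripleo_nova_virt*d service in turn. Outside Ansible that argument pair collapses to a stop-plus-disable per unit; a sketch with systemctl, unit names copied from the log:

    import subprocess

    UNITS = [
        'tripleo_nova_libvirt.target',
        'tripleo_nova_virtlogd_wrapper.service',
        'tripleo_nova_virtnodedevd.service',
        'tripleo_nova_virtproxyd.service',
        'tripleo_nova_virtqemud.service',
        'tripleo_nova_virtsecretd.service',
        'tripleo_nova_virtstoraged.service',
    ]

    for unit in UNITS:
        # state=stopped + enabled=False is roughly `systemctl disable --now <unit>`
        subprocess.run(['systemctl', 'disable', '--now', unit], check=False)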
Dec  1 05:01:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:26.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:27.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:01:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:27.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:01:27 np0005540825 python3.9[166341]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec  1 05:01:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:28.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:28 np0005540825 python3.9[166493]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  1 05:01:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:28.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  1 05:01:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:28.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:01:29 np0005540825 python3.9[166645]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:01:29 np0005540825 python3.9[166799]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec  1 05:01:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:30.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:30 np0005540825 python3.9[166951]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:30.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:31] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:01:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:31] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:01:31 np0005540825 python3.9[167104]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:01:32 np0005540825 python3.9[167257]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
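The negative returns in the recovery-backend lines read as errnos: ret=-2 is ENOENT, i.e. the RADOS recovery database has no client records yet (expected on a fresh start), which is why Ganesha proceeds into a normal 90-second grace period anyway. The -45 from rados_cluster_grace_enforcing decodes oddly as a plain Linux errno, so it is more likely a Ganesha-internal status. A quick decode for the unambiguous one:

    import errno
    import os

    # ret=-2 from rados_kv_traverse above
    print(errno.errorcode[2], '-', os.strerror(2))   # ENOENT - No such file or directory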
Dec  1 05:01:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:32.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:32.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:01:32 np0005540825 python3.9[167409]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:01:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:01:33 np0005540825 podman[167481]: 2025-12-01 10:01:33.234095714 +0000 UTC m=+0.091028088 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
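The ovn_metadata_agent health check passes (health_status=healthy, failing streak 0), and the event embeds the container's full config_data as a Python-literal dict inside the label dump. Once that substring is cut out of the line, ast.literal_eval turns it back into a structure; a sketch over a shortened excerpt of the value above:

    import ast

    # Shortened excerpt of the config_data value from the event above.
    config_data = ("{'cgroupns': 'host', 'depends_on': ['openvswitch.service'], "
                   "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', "
                   "'test': '/openstack/healthcheck'}, 'net': 'host', 'privileged': True}")
    cfg = ast.literal_eval(config_data)
    print(cfg['healthcheck']['test'], cfg['privileged'])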
Dec  1 05:01:33 np0005540825 python3.9[167583]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 05:01:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:34.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:34 np0005540825 python3.9[167735]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:01:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:34.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:35 np0005540825 python3.9[167887]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100135 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:01:35 np0005540825 python3.9[168041]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Dec  1 05:01:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:36.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:36 np0005540825 python3.9[168193]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:36.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000010:nfs.cephfs.2: -2
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  1 05:01:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
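Despite the CRIT lines, the daemon reaches NFS SERVER INITIALIZED: gsh_dbus_pkginit states the cause directly (no /run/dbus/system_bus_socket inside the container), so the dbus service thread exits while NFS proper keeps starting, and the Kerberos keytab warnings are likewise non-fatal for startup here. A trivial check for the socket from inside the container's mount namespace, path taken from the log:

    from pathlib import Path

    sock = Path('/run/dbus/system_bus_socket')
    print(sock, 'exists' if sock.exists() else 'missing')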
Dec  1 05:01:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:37.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:01:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:37.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:01:37 np0005540825 python3.9[168360]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:01:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:37 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd590000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec  1 05:01:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:38 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:38.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:38 np0005540825 python3.9[168515]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
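journald/rsyslog escapes embedded newlines as #012 (octal 012 = LF), so the _raw_params payload above is really a three-line guarded shell snippet: disable certmonger only if it is active, then mask it unless a local unit file already exists at /etc/systemd/system. A decode sketch:

    raw = ('if systemctl is-active certmonger.service; then#012'
           '  systemctl disable --now certmonger.service#012'
           '  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012'
           'fi#012')
    print(raw.replace('#012', '\n'))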
Dec  1 05:01:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:38.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:38.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:01:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:38 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:39 np0005540825 python3.9[168693]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:01:39
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.meta', 'vms', 'volumes', '.nfs', 'default.rgw.meta', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.data']
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:01:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:01:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:01:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:39 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:01:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
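[Annotation] The autoscaler lines above are self-consistent, and the "pg target" values can be reproduced by hand: target = capacity ratio x bias x (target PGs per OSD x OSD count), then quantized to a power of two (hence '.mgr' landing at 1, and 'cephfs.cephfs.meta' suggesting 16 from its current 32). With the default mon_target_pg_per_osd of 100 and three OSDs the multiplier is 300; both the default and the OSD count are inferences, not stated in the log. A check against two of the pools:

    # Sketch: reproduce the autoscaler's "pg target" from the logged ratios.
    # MULTIPLIER = mon_target_pg_per_osd (default 100, assumed) * OSD count
    # (3, inferred from this deployment).
    MULTIPLIER = 100 * 3

    pools = {
        # pool: (capacity ratio, bias, pg target as logged above)
        ".mgr": (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    }
    for name, (ratio, bias, logged) in pools.items():
        computed = ratio * bias * MULTIPLIER
        # Both pools reproduce the logged target to full precision.
        print(f"{name}: computed={computed:.16g} logged={logged:.16g}")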
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:01:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:01:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec  1 05:01:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100140 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:01:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:40 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd590001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
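[Annotation] The recurring ganesha "svc_vc_recv ... proxy header rest len failed" EVENT lines pair with the co-located haproxy-nfs service: the NFS backend expects a PROXY-protocol header, while haproxy's Layer4 health check (see its UP/DOWN lines in this section) just opens a TCP connection and closes it without sending one, so ntirpc cannot parse a header and marks the transport dead, roughly every two seconds per worker thread. That reading is an inference from the cadence; the stray "%" in "rlen = %" is the message's own formatting, preserved verbatim. A sketch of both behaviours, with the port an assumption (the log does not show the listener address):

    # Sketch: what a bare layer-4 check does to a PROXY-protocol listener:
    # connect, send nothing, close. A well-behaved client would first send
    # a v1 header like PROXY_V1 below.
    import socket

    PROXY_V1 = b"PROXY TCP4 192.168.122.100 192.168.122.100 40000 2049\r\n"

    def l4_check(host: str, port: int = 2049, timeout: float = 1.0) -> bool:
        """TCP connect/close with no payload, as the health check does."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True   # the server side logs the svc_vc_recv EVENT
        except OSError:
            return False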
Dec  1 05:01:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:40.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
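[Annotation] The beast access lines here and throughout the section follow one pattern: an anonymous "HEAD / HTTP/1.0" from 192.168.122.102 about every two seconds, and from 192.168.122.100 offset by roughly 0.7 s, each answered 200 within a millisecond. The cadence and the anonymous HEAD are characteristic of load-balancer health probes rather than client traffic (an inference; the log names only the source IPs). An equivalent probe, with host and port as placeholder assumptions:

    # Sketch: the layer-7 probe the access lines suggest: an anonymous
    # "HEAD /" that only cares whether a 200 comes back.
    import http.client

    def rgw_healthy(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()

    print(rgw_healthy("192.168.122.100"))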
Dec  1 05:01:40 np0005540825 python3.9[168846]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 05:01:40 np0005540825 systemd[1]: Reloading.
Dec  1 05:01:40 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:01:40 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:01:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000029s ======
Dec  1 05:01:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:40.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  1 05:01:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:40 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:41] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Dec  1 05:01:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:41] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Dec  1 05:01:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:41 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:41 np0005540825 python3.9[169034]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:01:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  1 05:01:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:42 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:42.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:42 np0005540825 python3.9[169187]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:01:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:42.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:42 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd590001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:43 np0005540825 python3.9[169340]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:01:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:43 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:43 np0005540825 python3.9[169495]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:01:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  1 05:01:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:44 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:44.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:01:44 np0005540825 python3.9[169648]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:01:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:44.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:44 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:45 np0005540825 python3.9[169802]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:01:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:45 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:45 np0005540825 python3.9[169956]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
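[Annotation] The python3.9 tasks from 05:01:41 through 05:01:45 are one Ansible sweep: `systemctl reset-failed` over the tripleo_nova_libvirt target and each modular libvirt daemon, clearing any failed state before the libvirt packages are installed at 05:01:53 below. The same sweep as a standalone sketch, with unit names copied from the log:

    # Sketch: the reset-failed sweep the Ansible tasks above run, one unit
    # per task. reset-failed is idempotent; it clears a unit's "failed"
    # state so later starts are not blocked by restart rate limiting.
    import subprocess

    UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]
    for unit in UNITS:
        subprocess.run(["systemctl", "reset-failed", unit], check=False)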
Dec  1 05:01:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  1 05:01:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:46 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:46 np0005540825 podman[169958]: 2025-12-01 10:01:46.176397567 +0000 UTC m=+0.174404459 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 05:01:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:46.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:46.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:46 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:47.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
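[Annotation] Alertmanager cannot deliver the ceph-dashboard webhook to either receiver (compute-1 and compute-2, port 8443, path /api/prometheus_receiver); both retries hit the context deadline, and a later attempt at 10:01:58 resolves one failure further to a dial timeout against 192.168.122.102, pointing at unreachable endpoints rather than slow ones. A quick reachability probe, with the URL copied from the log and the timeout an arbitrary choice:

    # Sketch: probe the webhook receiver alertmanager cannot reach. Any
    # HTTP response at all, even an error status, would rule out the
    # dial/deadline failures logged above.
    import urllib.error
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            print("reachable, status", resp.status)
    except urllib.error.HTTPError as exc:
        print("reachable, status", exc.code)  # answered, just not 2xx
    except (urllib.error.URLError, OSError) as exc:
        print("unreachable:", exc)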
Dec  1 05:01:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:47 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c0019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:47 np0005540825 python3.9[170137]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  1 05:01:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec  1 05:01:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:48 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:48.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:48 np0005540825 python3.9[170290]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 05:01:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:48.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:01:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:48.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:48 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:49 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:01:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec  1 05:01:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:50 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c0019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:50.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:50 np0005540825 python3.9[170450]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 05:01:50 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 05:01:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:50.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:50 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:51] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Dec  1 05:01:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:01:51] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Dec  1 05:01:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100151 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:01:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:51 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Dec  1 05:01:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:52 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:52.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:52 np0005540825 python3.9[170613]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 05:01:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:52.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:52 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:53 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:53 np0005540825 python3.9[170699]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 05:01:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:01:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:54 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:54.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:01:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
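[Annotation] This mon_command/audit pair recurs at 05:01:39, 05:01:54 and 05:02:09: the mgr (entity mgr.compute-0.fospow) polls "osd blocklist ls" on a roughly 15-second cycle, a periodic refresh rather than an operator command. The same listing from the CLI, assuming admin credentials (a standard command, matching the JSON in the audit line):

    # Sketch: run the query the mgr is auditing above.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"])
    entries = json.loads(out)  # expected: a list of blocklist entries
    print(f"{len(entries)} blocklist entries")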
Dec  1 05:01:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:01:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:54.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:54 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:55 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:01:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:56 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:56.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:56.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:56 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:57.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:01:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:57 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:01:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:58 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:01:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:01:58.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:01:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:01:58.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:01:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:01:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:01:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:01:58.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:01:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:58 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:01:59 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd59000a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:01:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:02:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:00 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:00.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:00 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:02:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:00.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:00 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:01] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:02:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:01] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:02:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:01 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 511 B/s wr, 1 op/s
Dec  1 05:02:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:02 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd59000a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:02.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:02.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:02 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:03 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:02:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:03 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:02:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:03 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:02:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:03 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  1 05:02:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:04 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:04 np0005540825 podman[170821]: 2025-12-01 10:02:04.226072743 +0000 UTC m=+0.075362307 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 05:02:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:04.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:02:04.552 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:02:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:02:04.553 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:02:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:02:04.554 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
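[Annotation] The three ovn_metadata_agent DEBUG lines are oslo.concurrency's lock tracing around ProcessMonitor._check_child_processes: acquiring, acquired after waiting 0.001 s, released after holding 0.000 s. A stdlib rendering of the same acquire/waited/held accounting (the agent itself uses oslo_concurrency.lockutils; this only mirrors the logged pattern):

    # Sketch: reproduce the acquire/waited/held trace that
    # oslo.concurrency logs around "_check_child_processes" above.
    import threading
    import time
    from contextlib import contextmanager

    _lock = threading.Lock()

    @contextmanager
    def traced_lock(name: str):
        t0 = time.monotonic()
        with _lock:
            print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                print(f'Lock "{name}" released :: held {time.monotonic() - t1:.3f}s')

    with traced_lock("_check_child_processes"):
        pass  # the agent checks its spawned haproxy children here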
Dec  1 05:02:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:04.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:04 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd59000a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:05 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd59000a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:02:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:06 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd59000a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:06.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:06 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
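[Annotation] This closes the grace bracket opened at 10:02:00: ganesha entered grace with a nominal 90-second duration, the reaper reloaded reclaim state from the backend, found zero clients with reclaims pending (reclaim complete(0), clid count(0)), and lifted grace after about six seconds instead of the full 90. Measured from the two timestamps above:

    # Sketch: the actual grace window, from the ganesha events above
    # (nominal duration 90 s; zero reclaiming clients let it lift early).
    from datetime import datetime

    fmt = "%d/%m/%Y %H:%M:%S"
    start = datetime.strptime("01/12/2025 10:02:00", fmt)
    lift = datetime.strptime("01/12/2025 10:02:06", fmt)
    print((lift - start).total_seconds(), "seconds in grace, of a nominal 90")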
Dec  1 05:02:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:06.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:06 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd59000a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:07.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:02:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:07 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:02:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:08 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5600036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:08.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:08.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:02:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:08.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:08 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c0013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:02:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:02:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:09 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:02:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:02:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:02:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:02:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:02:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:02:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:02:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:10 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:10.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:10.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:10 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:11] "GET /metrics HTTP/1.1" 200 48437 "" "Prometheus/2.51.0"
Dec  1 05:02:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:11] "GET /metrics HTTP/1.1" 200 48437 "" "Prometheus/2.51.0"
Dec  1 05:02:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100211 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:02:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:11 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c002090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:02:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:12 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:12.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:12.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:12 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5540016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:13 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Dec  1 05:02:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:14 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c002090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:14.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:14.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:14 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:15 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5540016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Dec  1 05:02:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:16 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:02:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:16.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:02:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:02:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:16.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:02:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:16 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:17.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:02:17 np0005540825 podman[170961]: 2025-12-01 10:02:17.296135938 +0000 UTC m=+0.153538282 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:02:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:17 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:02:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:18 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5540016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:18.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:18.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:02:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:18.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:02:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:18.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:02:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:18.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:18 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:19 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:02:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:20 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:20.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:20.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:20 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:02:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 552 B/s rd, 92 B/s wr, 0 op/s
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:02:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:21] "GET /metrics HTTP/1.1" 200 48437 "" "Prometheus/2.51.0"
Dec  1 05:02:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:21] "GET /metrics HTTP/1.1" 200 48437 "" "Prometheus/2.51.0"
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:02:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:02:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:21 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:22 np0005540825 podman[171191]: 2025-12-01 10:02:21.934899789 +0000 UTC m=+0.023195847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:02:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:22 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:22 np0005540825 podman[171191]: 2025-12-01 10:02:22.061640995 +0000 UTC m=+0.149937053 container create 2ea7f21362a243ba679496d992891c1dc6d42604fdab4d13d3e18271fa509751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 05:02:22 np0005540825 systemd[1]: Started libpod-conmon-2ea7f21362a243ba679496d992891c1dc6d42604fdab4d13d3e18271fa509751.scope.
Dec  1 05:02:22 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:02:22 np0005540825 podman[171191]: 2025-12-01 10:02:22.183572068 +0000 UTC m=+0.271868096 container init 2ea7f21362a243ba679496d992891c1dc6d42604fdab4d13d3e18271fa509751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_herschel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:02:22 np0005540825 podman[171191]: 2025-12-01 10:02:22.191333171 +0000 UTC m=+0.279629189 container start 2ea7f21362a243ba679496d992891c1dc6d42604fdab4d13d3e18271fa509751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_herschel, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  1 05:02:22 np0005540825 podman[171191]: 2025-12-01 10:02:22.196994066 +0000 UTC m=+0.285290094 container attach 2ea7f21362a243ba679496d992891c1dc6d42604fdab4d13d3e18271fa509751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_herschel, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:02:22 np0005540825 laughing_herschel[171208]: 167 167
Dec  1 05:02:22 np0005540825 systemd[1]: libpod-2ea7f21362a243ba679496d992891c1dc6d42604fdab4d13d3e18271fa509751.scope: Deactivated successfully.
Dec  1 05:02:22 np0005540825 podman[171191]: 2025-12-01 10:02:22.198750485 +0000 UTC m=+0.287046503 container died 2ea7f21362a243ba679496d992891c1dc6d42604fdab4d13d3e18271fa509751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_herschel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 05:02:22 np0005540825 systemd[1]: var-lib-containers-storage-overlay-143f07f6f6f19a6cacd4a99937f2c17549e3599938a67175ab4f558e2938394f-merged.mount: Deactivated successfully.
Dec  1 05:02:22 np0005540825 podman[171191]: 2025-12-01 10:02:22.241956669 +0000 UTC m=+0.330252687 container remove 2ea7f21362a243ba679496d992891c1dc6d42604fdab4d13d3e18271fa509751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:02:22 np0005540825 systemd[1]: libpod-conmon-2ea7f21362a243ba679496d992891c1dc6d42604fdab4d13d3e18271fa509751.scope: Deactivated successfully.
Dec  1 05:02:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:22.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:22 np0005540825 kernel: SELinux:  Converting 2775 SID table entries...
Dec  1 05:02:22 np0005540825 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 05:02:22 np0005540825 kernel: SELinux:  policy capability open_perms=1
Dec  1 05:02:22 np0005540825 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 05:02:22 np0005540825 kernel: SELinux:  policy capability always_check_network=0
Dec  1 05:02:22 np0005540825 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 05:02:22 np0005540825 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 05:02:22 np0005540825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 05:02:22 np0005540825 podman[171232]: 2025-12-01 10:02:22.434098788 +0000 UTC m=+0.053061056 container create f0782640d3cf7aa58e3a8065c09f53c7e168fd50271dcc7c677981fc472a0ed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:02:22 np0005540825 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec  1 05:02:22 np0005540825 systemd[1]: Started libpod-conmon-f0782640d3cf7aa58e3a8065c09f53c7e168fd50271dcc7c677981fc472a0ed5.scope.
Dec  1 05:02:22 np0005540825 podman[171232]: 2025-12-01 10:02:22.408334371 +0000 UTC m=+0.027296629 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:02:22 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:02:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555d16b6d2a0327e1742a02deb5d0ac25d4a041ffee96e1cfce75c2cc84196e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555d16b6d2a0327e1742a02deb5d0ac25d4a041ffee96e1cfce75c2cc84196e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555d16b6d2a0327e1742a02deb5d0ac25d4a041ffee96e1cfce75c2cc84196e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555d16b6d2a0327e1742a02deb5d0ac25d4a041ffee96e1cfce75c2cc84196e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555d16b6d2a0327e1742a02deb5d0ac25d4a041ffee96e1cfce75c2cc84196e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:22 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:02:22 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:02:22 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:02:22 np0005540825 podman[171232]: 2025-12-01 10:02:22.587568026 +0000 UTC m=+0.206530354 container init f0782640d3cf7aa58e3a8065c09f53c7e168fd50271dcc7c677981fc472a0ed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nobel, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:02:22 np0005540825 podman[171232]: 2025-12-01 10:02:22.594648071 +0000 UTC m=+0.213610319 container start f0782640d3cf7aa58e3a8065c09f53c7e168fd50271dcc7c677981fc472a0ed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:02:22 np0005540825 podman[171232]: 2025-12-01 10:02:22.600068659 +0000 UTC m=+0.219030927 container attach f0782640d3cf7aa58e3a8065c09f53c7e168fd50271dcc7c677981fc472a0ed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nobel, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec  1 05:02:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:22.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:22 np0005540825 pedantic_nobel[171249]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:02:22 np0005540825 pedantic_nobel[171249]: --> All data devices are unavailable
Dec  1 05:02:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:22 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:22 np0005540825 systemd[1]: libpod-f0782640d3cf7aa58e3a8065c09f53c7e168fd50271dcc7c677981fc472a0ed5.scope: Deactivated successfully.
Dec  1 05:02:23 np0005540825 podman[171264]: 2025-12-01 10:02:23.019993235 +0000 UTC m=+0.030904368 container died f0782640d3cf7aa58e3a8065c09f53c7e168fd50271dcc7c677981fc472a0ed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:02:23 np0005540825 systemd[1]: var-lib-containers-storage-overlay-555d16b6d2a0327e1742a02deb5d0ac25d4a041ffee96e1cfce75c2cc84196e5-merged.mount: Deactivated successfully.
Dec  1 05:02:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 0 op/s
Dec  1 05:02:23 np0005540825 podman[171264]: 2025-12-01 10:02:23.195073696 +0000 UTC m=+0.205984769 container remove f0782640d3cf7aa58e3a8065c09f53c7e168fd50271dcc7c677981fc472a0ed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  1 05:02:23 np0005540825 systemd[1]: libpod-conmon-f0782640d3cf7aa58e3a8065c09f53c7e168fd50271dcc7c677981fc472a0ed5.scope: Deactivated successfully.
Dec  1 05:02:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:23 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:24 np0005540825 podman[171373]: 2025-12-01 10:02:24.015969589 +0000 UTC m=+0.069931919 container create 38c55d3cee8836573a30b43f31cbe92b42c02e34c48ef8f5b12c6bbda1a0d565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  1 05:02:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:24 np0005540825 systemd[1]: Started libpod-conmon-38c55d3cee8836573a30b43f31cbe92b42c02e34c48ef8f5b12c6bbda1a0d565.scope.
Dec  1 05:02:24 np0005540825 podman[171373]: 2025-12-01 10:02:23.984388102 +0000 UTC m=+0.038350272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:02:24 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:02:24 np0005540825 podman[171373]: 2025-12-01 10:02:24.143918227 +0000 UTC m=+0.197880347 container init 38c55d3cee8836573a30b43f31cbe92b42c02e34c48ef8f5b12c6bbda1a0d565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_williams, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:02:24 np0005540825 podman[171373]: 2025-12-01 10:02:24.159147925 +0000 UTC m=+0.213110025 container start 38c55d3cee8836573a30b43f31cbe92b42c02e34c48ef8f5b12c6bbda1a0d565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_williams, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:02:24 np0005540825 podman[171373]: 2025-12-01 10:02:24.16334174 +0000 UTC m=+0.217303880 container attach 38c55d3cee8836573a30b43f31cbe92b42c02e34c48ef8f5b12c6bbda1a0d565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_williams, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 05:02:24 np0005540825 lucid_williams[171389]: 167 167
Dec  1 05:02:24 np0005540825 systemd[1]: libpod-38c55d3cee8836573a30b43f31cbe92b42c02e34c48ef8f5b12c6bbda1a0d565.scope: Deactivated successfully.
Dec  1 05:02:24 np0005540825 podman[171373]: 2025-12-01 10:02:24.164876732 +0000 UTC m=+0.218838842 container died 38c55d3cee8836573a30b43f31cbe92b42c02e34c48ef8f5b12c6bbda1a0d565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 05:02:24 np0005540825 systemd[1]: var-lib-containers-storage-overlay-8ac45c1be18264d639b36c1d8c84a3b529d2a4651ce0210fb66d4f193fb6207d-merged.mount: Deactivated successfully.
Dec  1 05:02:24 np0005540825 podman[171373]: 2025-12-01 10:02:24.214438231 +0000 UTC m=+0.268400331 container remove 38c55d3cee8836573a30b43f31cbe92b42c02e34c48ef8f5b12c6bbda1a0d565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:02:24 np0005540825 systemd[1]: libpod-conmon-38c55d3cee8836573a30b43f31cbe92b42c02e34c48ef8f5b12c6bbda1a0d565.scope: Deactivated successfully.
Dec  1 05:02:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:24.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:24 np0005540825 podman[171414]: 2025-12-01 10:02:24.426860867 +0000 UTC m=+0.078186936 container create 5664cef8d78100cf7288586fc366c96ef29e8afce8d05ea8bbaa51c722290578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hoover, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:02:24 np0005540825 systemd[1]: Started libpod-conmon-5664cef8d78100cf7288586fc366c96ef29e8afce8d05ea8bbaa51c722290578.scope.
Dec  1 05:02:24 np0005540825 podman[171414]: 2025-12-01 10:02:24.386405547 +0000 UTC m=+0.037731696 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:02:24 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:02:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f4dc1b80bb7a16192fc18a96d6cf96bcc3177f34dd8a1c78f33b3bfcd249c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f4dc1b80bb7a16192fc18a96d6cf96bcc3177f34dd8a1c78f33b3bfcd249c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f4dc1b80bb7a16192fc18a96d6cf96bcc3177f34dd8a1c78f33b3bfcd249c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:24 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f4dc1b80bb7a16192fc18a96d6cf96bcc3177f34dd8a1c78f33b3bfcd249c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:02:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:02:24 np0005540825 podman[171414]: 2025-12-01 10:02:24.534350444 +0000 UTC m=+0.185676523 container init 5664cef8d78100cf7288586fc366c96ef29e8afce8d05ea8bbaa51c722290578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 05:02:24 np0005540825 podman[171414]: 2025-12-01 10:02:24.540683818 +0000 UTC m=+0.192009877 container start 5664cef8d78100cf7288586fc366c96ef29e8afce8d05ea8bbaa51c722290578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hoover, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:02:24 np0005540825 podman[171414]: 2025-12-01 10:02:24.560026169 +0000 UTC m=+0.211352248 container attach 5664cef8d78100cf7288586fc366c96ef29e8afce8d05ea8bbaa51c722290578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hoover, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Dec  1 05:02:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:24 np0005540825 busy_hoover[171430]: {
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:    "1": [
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:        {
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            "devices": [
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "/dev/loop3"
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            ],
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            "lv_name": "ceph_lv0",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            "lv_size": "21470642176",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            "name": "ceph_lv0",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            "tags": {
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.cluster_name": "ceph",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.crush_device_class": "",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.encrypted": "0",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.osd_id": "1",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.type": "block",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.vdo": "0",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:                "ceph.with_tpm": "0"
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            },
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            "type": "block",
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:            "vg_name": "ceph_vg0"
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:        }
Dec  1 05:02:24 np0005540825 busy_hoover[171430]:    ]
Dec  1 05:02:24 np0005540825 busy_hoover[171430]: }
Dec  1 05:02:24 np0005540825 systemd[1]: libpod-5664cef8d78100cf7288586fc366c96ef29e8afce8d05ea8bbaa51c722290578.scope: Deactivated successfully.
Dec  1 05:02:24 np0005540825 podman[171439]: 2025-12-01 10:02:24.881630938 +0000 UTC m=+0.044222604 container died 5664cef8d78100cf7288586fc366c96ef29e8afce8d05ea8bbaa51c722290578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hoover, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 05:02:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:24.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:24 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b0f4dc1b80bb7a16192fc18a96d6cf96bcc3177f34dd8a1c78f33b3bfcd249c7-merged.mount: Deactivated successfully.
Dec  1 05:02:25 np0005540825 podman[171439]: 2025-12-01 10:02:25.044194576 +0000 UTC m=+0.206786212 container remove 5664cef8d78100cf7288586fc366c96ef29e8afce8d05ea8bbaa51c722290578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 05:02:25 np0005540825 systemd[1]: libpod-conmon-5664cef8d78100cf7288586fc366c96ef29e8afce8d05ea8bbaa51c722290578.scope: Deactivated successfully.
Dec  1 05:02:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 0 op/s
Dec  1 05:02:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:25 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:25 np0005540825 podman[171543]: 2025-12-01 10:02:25.71199381 +0000 UTC m=+0.070473234 container create 28bd4fa5968efdcf9d2c472bdca6abe72a098a9598200855c097ef0c07f12980 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:02:25 np0005540825 podman[171543]: 2025-12-01 10:02:25.680460035 +0000 UTC m=+0.038939469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:02:25 np0005540825 systemd[1]: Started libpod-conmon-28bd4fa5968efdcf9d2c472bdca6abe72a098a9598200855c097ef0c07f12980.scope.
Dec  1 05:02:25 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:02:25 np0005540825 podman[171543]: 2025-12-01 10:02:25.890923217 +0000 UTC m=+0.249402721 container init 28bd4fa5968efdcf9d2c472bdca6abe72a098a9598200855c097ef0c07f12980 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 05:02:25 np0005540825 podman[171543]: 2025-12-01 10:02:25.897859537 +0000 UTC m=+0.256338991 container start 28bd4fa5968efdcf9d2c472bdca6abe72a098a9598200855c097ef0c07f12980 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:02:25 np0005540825 pedantic_gould[171559]: 167 167
Dec  1 05:02:25 np0005540825 systemd[1]: libpod-28bd4fa5968efdcf9d2c472bdca6abe72a098a9598200855c097ef0c07f12980.scope: Deactivated successfully.
Dec  1 05:02:25 np0005540825 podman[171543]: 2025-12-01 10:02:25.951180388 +0000 UTC m=+0.309659912 container attach 28bd4fa5968efdcf9d2c472bdca6abe72a098a9598200855c097ef0c07f12980 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 05:02:25 np0005540825 podman[171543]: 2025-12-01 10:02:25.95416187 +0000 UTC m=+0.312641324 container died 28bd4fa5968efdcf9d2c472bdca6abe72a098a9598200855c097ef0c07f12980 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gould, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:02:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:26 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
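An aside on the recurring ganesha `svc_vc_recv: ... proxy header rest len failed` events: the RPC layer appears to expect a PROXY-protocol preamble on incoming connections (as haproxy sends with `send-proxy`) and marks dead any connection whose preamble does not parse — bare TCP probes would trip this every time, which fits the steady once-per-second cadence below. The lone `%` in `header rlen = %` is part of the upstream log message itself, not journal corruption. A minimal sketch of a client that does send a valid PROXY v1 preamble; host, port, and addresses are hypothetical placeholders, not values from this log:

```python
import socket

# Minimal sketch: open a TCP connection and send a PROXY protocol v1
# preamble before the real payload, as haproxy does with `send-proxy`.
def connect_with_proxy_v1(host: str, port: int,
                          src_ip: str, src_port: int) -> socket.socket:
    s = socket.create_connection((host, port))
    dst_ip, dst_port = s.getpeername()[:2]
    # "PROXY TCP4 <src> <dst> <sport> <dport>\r\n" per the PROXY v1 spec.
    preamble = f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n"
    s.sendall(preamble.encode("ascii"))
    return s

# A probe that omits this preamble (like a bare Layer4 health check)
# would fail the header parse and be dropped, matching the events above.
```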
Dec  1 05:02:26 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f52cd0fe832e5edbbe40f4b7376543f4aca6217f8792720c92f9871d3bc41ade-merged.mount: Deactivated successfully.
Dec  1 05:02:26 np0005540825 podman[171543]: 2025-12-01 10:02:26.112686317 +0000 UTC m=+0.471165741 container remove 28bd4fa5968efdcf9d2c472bdca6abe72a098a9598200855c097ef0c07f12980 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_gould, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  1 05:02:26 np0005540825 systemd[1]: libpod-conmon-28bd4fa5968efdcf9d2c472bdca6abe72a098a9598200855c097ef0c07f12980.scope: Deactivated successfully.
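The create → init → start → attach → died → remove sequence above is the complete lifecycle of a short-lived, auto-removed container (cephadm briefly running the ceph image to probe the host; its only output was `167 167`). The same stream journald captures here can be followed live with `podman events`; a minimal sketch, assuming podman is installed and the invoking user can reach its socket:

```python
import json
import subprocess

# Follow podman's event stream and print container lifecycle events,
# mirroring the create/init/start/attach/died/remove lines journald records.
proc = subprocess.Popen(
    ["podman", "events", "--format", "json", "--filter", "type=container"],
    stdout=subprocess.PIPE,
    text=True,
)
for line in proc.stdout:
    evt = json.loads(line)
    # Each event carries at least a status (create, init, start, ...),
    # the container ID, and a name; .get() hedges against field variations.
    print(evt.get("Status"), evt.get("ID", "")[:12], evt.get("Name"))
```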
Dec  1 05:02:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:26.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
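The radosgw `beast:` lines are its embedded access log, and the anonymous `HEAD /` requests arriving every two seconds from 192.168.122.100 and .102 have the cadence of external load-balancer health checks. The format is regular enough to parse mechanically; a small sketch run against one of the lines above:

```python
import re

# Field layout observed in the beast lines: client - user [ts] "req" status bytes ... latency=Ns
BEAST = re.compile(
    r"beast: \S+: (?P<ip>\S+) - (?P<user>\S+) "
    r"\[(?P<ts>[^\]]+)\] \"(?P<req>[^\"]*)\" (?P<status>\d+) (?P<bytes>\d+) "
    r".* latency=(?P<latency>[\d.]+)s"
)

line = ('beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous '
        '[01/Dec/2025:10:02:26.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000028s')
m = BEAST.search(line)
assert m is not None
print(m.group("ip"), m.group("req"), m.group("status"), float(m.group("latency")))
```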
Dec  1 05:02:26 np0005540825 podman[171586]: 2025-12-01 10:02:26.31187023 +0000 UTC m=+0.051406531 container create 86c7d488569580fd214edaff52faa4f860c67c711b4a344454ad6c5827fc256d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_nobel, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 05:02:26 np0005540825 podman[171586]: 2025-12-01 10:02:26.285731293 +0000 UTC m=+0.025267634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:02:26 np0005540825 systemd[1]: Started libpod-conmon-86c7d488569580fd214edaff52faa4f860c67c711b4a344454ad6c5827fc256d.scope.
Dec  1 05:02:26 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:02:26 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70406ec4b111ea61865efa8b38d4e1af01b7ededf227e51d3038f3a4de68ab4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:26 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70406ec4b111ea61865efa8b38d4e1af01b7ededf227e51d3038f3a4de68ab4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:26 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70406ec4b111ea61865efa8b38d4e1af01b7ededf227e51d3038f3a4de68ab4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:26 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70406ec4b111ea61865efa8b38d4e1af01b7ededf227e51d3038f3a4de68ab4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:02:26 np0005540825 podman[171586]: 2025-12-01 10:02:26.505380706 +0000 UTC m=+0.244917017 container init 86c7d488569580fd214edaff52faa4f860c67c711b4a344454ad6c5827fc256d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_nobel, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:02:26 np0005540825 podman[171586]: 2025-12-01 10:02:26.516163692 +0000 UTC m=+0.255699993 container start 86c7d488569580fd214edaff52faa4f860c67c711b4a344454ad6c5827fc256d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:02:26 np0005540825 podman[171586]: 2025-12-01 10:02:26.542122904 +0000 UTC m=+0.281659175 container attach 86c7d488569580fd214edaff52faa4f860c67c711b4a344454ad6c5827fc256d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  1 05:02:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:26.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:26 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:27.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:02:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:27.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
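Both alertmanager failures say the same thing: the ceph-dashboard webhook receivers on compute-1 and compute-2 were unreachable on port 8443 when the notification fired (`dial tcp ... i/o timeout`), which is what a standby mgr with no dashboard listening produces. For illustration only, a minimal stand-in receiver that would accept such POSTs — the path comes from the log, but the handler is hypothetical and not the dashboard's implementation:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    # Accept Alertmanager webhook POSTs and acknowledge with 200,
    # which is all the dispatcher needs to stop retrying.
    def do_POST(self):
        if self.path != "/api/prometheus_receiver":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)  # JSON alert payload
        print("received", len(body), "bytes")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
```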
Dec  1 05:02:27 np0005540825 lvm[171677]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:02:27 np0005540825 lvm[171677]: VG ceph_vg0 finished
Dec  1 05:02:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 0 op/s
Dec  1 05:02:27 np0005540825 hungry_nobel[171602]: {}
Dec  1 05:02:27 np0005540825 systemd[1]: libpod-86c7d488569580fd214edaff52faa4f860c67c711b4a344454ad6c5827fc256d.scope: Deactivated successfully.
Dec  1 05:02:27 np0005540825 systemd[1]: libpod-86c7d488569580fd214edaff52faa4f860c67c711b4a344454ad6c5827fc256d.scope: Consumed 1.077s CPU time.
Dec  1 05:02:27 np0005540825 podman[171586]: 2025-12-01 10:02:27.177835118 +0000 UTC m=+0.917371379 container died 86c7d488569580fd214edaff52faa4f860c67c711b4a344454ad6c5827fc256d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_nobel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:02:27 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d70406ec4b111ea61865efa8b38d4e1af01b7ededf227e51d3038f3a4de68ab4-merged.mount: Deactivated successfully.
Dec  1 05:02:27 np0005540825 podman[171586]: 2025-12-01 10:02:27.274758196 +0000 UTC m=+1.014294457 container remove 86c7d488569580fd214edaff52faa4f860c67c711b4a344454ad6c5827fc256d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_nobel, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  1 05:02:27 np0005540825 systemd[1]: libpod-conmon-86c7d488569580fd214edaff52faa4f860c67c711b4a344454ad6c5827fc256d.scope: Deactivated successfully.
Dec  1 05:02:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:02:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:02:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:02:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:02:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:27 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:28 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:02:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:28.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:02:28 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:02:28 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:02:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:28.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:02:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:28.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:02:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:28.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:28 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 0 op/s
Dec  1 05:02:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
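The monitor's recurring `_set_new_cache_sizes` line is its cache autotuner splitting a ~973 MiB budget between the rocksdb KV cache and the cached full and incremental osdmaps. The logged values are self-consistent; a quick check:

```python
# Allocations taken verbatim from the mon log line above; the field
# interpretation follows the field names (inc/full osdmap caches, rocksdb kv).
MiB = 1024 * 1024

cache_size = 1020054731
inc_alloc = 348127232   # 332 MiB for incremental osdmaps
full_alloc = 348127232  # 332 MiB for full osdmaps
kv_alloc = 318767104    # 304 MiB for the rocksdb cache

print(inc_alloc // MiB, full_alloc // MiB, kv_alloc // MiB)   # 332 332 304
assert inc_alloc + full_alloc + kv_alloc <= cache_size        # fits the budget
```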
Dec  1 05:02:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:29 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:30 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:30.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:30.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:30 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 552 B/s rd, 0 op/s
Dec  1 05:02:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:31] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:02:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:31] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:02:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:31 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:32 np0005540825 kernel: SELinux:  Converting 2775 SID table entries...
Dec  1 05:02:32 np0005540825 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 05:02:32 np0005540825 kernel: SELinux:  policy capability open_perms=1
Dec  1 05:02:32 np0005540825 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 05:02:32 np0005540825 kernel: SELinux:  policy capability always_check_network=0
Dec  1 05:02:32 np0005540825 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 05:02:32 np0005540825 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 05:02:32 np0005540825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
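The SELinux block is the kernel converting the SID table and re-reporting policy capabilities after a policy (re)load; the dbus-broker `op=load_policy ... seqno=13 res=1` line a few entries below is the userspace echo of the same event. The same capability flags can be read back at runtime from selinuxfs; a short sketch, assuming /sys/fs/selinux is mounted:

```python
from pathlib import Path

# Each SELinux policy capability printed by the kernel is also exposed as
# a file under selinuxfs containing 0 or 1; this mirrors the lines above.
caps_dir = Path("/sys/fs/selinux/policy_capabilities")
for cap in sorted(caps_dir.iterdir()):
    print(f"{cap.name}={cap.read_text().strip()}")
```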
Dec  1 05:02:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:32.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:32.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:02:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:33 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:34 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:34.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:02:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:34.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:02:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:34 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:35 np0005540825 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  1 05:02:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:02:35 np0005540825 podman[171735]: 2025-12-01 10:02:35.233251605 +0000 UTC m=+0.084295243 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
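The `container health_status` events embed the whole edpm_ansible container definition in the `config_data` label as a Python-style dict literal, so it can be lifted out of a log line and parsed rather than read by eye. A sketch over a trimmed excerpt of the label above (the real label also carries the full volume list):

```python
import ast

# Trimmed excerpt of the config_data label from the ovn_metadata_agent
# health event above.
config_data = ("{'cgroupns': 'host', 'depends_on': ['openvswitch.service'], "
               "'net': 'host', 'pid': 'host', 'privileged': True, "
               "'restart': 'always', 'user': 'root'}")

cfg = ast.literal_eval(config_data)  # a dict literal, so literal_eval suffices
print(cfg["restart"], cfg["privileged"])  # always True
```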
Dec  1 05:02:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:35 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:02:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:36.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:02:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:02:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:36.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:02:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:37.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:02:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:02:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:37 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:38 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:38.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:38.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:02:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:38.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:38 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:02:39
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['.mgr', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.nfs', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta']
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
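This balancer pass runs in upmap mode with a 5% max-misplaced budget, walks all twelve pools, and prepares 0 of an allowed 10 upmap changes — nothing to do, consistent with 353/353 PGs active+clean in the surrounding pgmap lines. The same state is queryable from the CLI; a sketch, assuming a working `ceph` client and keyring on the host:

```python
import json
import subprocess

# `ceph balancer status` reports the active mode and last optimization,
# matching the mgr balancer log lines above.
out = subprocess.run(
    ["ceph", "balancer", "status", "-f", "json"],
    check=True, capture_output=True, text=True,
).stdout
status = json.loads(out)
print(status.get("mode"), status.get("active"), status.get("last_optimize_started"))
```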
Dec  1 05:02:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:02:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:02:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:02:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:39 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
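The pg_autoscaler lines all follow one formula: a pool's pg target is its share of raw capacity times its bias times a cluster-wide PG budget, then quantized to a power of two (and left alone when close to the current value, e.g. `quantized to 16 (current 32)` producing no change here). The logged numbers are internally consistent with a budget of 300 PGs — plausibly 100 target PGs per OSD across 3 OSDs, though the log itself does not state that; reproducing two of them:

```python
# Reproduce the logged pg targets: target = capacity_ratio * bias * budget.
# BUDGET = 300 is inferred from the numbers, not stated in the log.
BUDGET = 300

for pool, ratio, bias, logged in [
    (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
]:
    target = ratio * bias * BUDGET
    assert abs(target - logged) < 1e-15, (pool, target)
    print(pool, target)
```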
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:02:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:02:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:40 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:40.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:02:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:40.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:02:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:40 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 05:02:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:41] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:02:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:41] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:02:41 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Check health
Dec  1 05:02:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:41 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:42 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560001070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:42.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:42.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:42 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:02:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100243 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:02:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:43 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:44 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:44.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:44.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:44 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd590001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:02:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:45 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:46 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:46.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:46.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:46 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:47.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:02:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:47.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:02:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:02:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:47 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd590001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:48 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:48 np0005540825 podman[173950]: 2025-12-01 10:02:48.247533071 +0000 UTC m=+0.107683284 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 05:02:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:48.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:48.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:02:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:48 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:02:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:49 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:50 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd590001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:50.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:50.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:50 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:02:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:51] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:02:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:02:51] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:02:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:51 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:52 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:52 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:02:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:52.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:52.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:52 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:02:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:53 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:54 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:54.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:02:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:02:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:02:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:54.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:02:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:54 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:02:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:55 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:02:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:55 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:02:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:55 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:56 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:56.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:02:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:56.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:02:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:56 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:57.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:02:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 853 B/s wr, 2 op/s
Dec  1 05:02:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:57 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:58 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:58 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:02:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:02:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:02:58.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:02:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:02:58.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:02:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:02:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:02:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:02:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:02:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:58 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:02:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Dec  1 05:02:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:02:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:02:59 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:00 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:00.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:00.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:00 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100301 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:03:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:03:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:01] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:03:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:01] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:03:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:01 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:02 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:02.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:02.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:02 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Dec  1 05:03:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100303 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:03:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:03 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:04 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:04.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:03:04.553 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:03:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:03:04.554 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:03:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:03:04.554 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:03:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:03:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:04.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:03:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:04 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Dec  1 05:03:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:05 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:06 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:06 np0005540825 podman[183628]: 2025-12-01 10:03:06.22956554 +0000 UTC m=+0.089373892 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec  1 05:03:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:06.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:06.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:06 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:07.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:03:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 852 B/s wr, 2 op/s
Dec  1 05:03:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:07 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:08 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:08.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:08.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:03:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:03:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:08.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:03:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:08 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:09 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:03:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:03:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=cleanup t=2025-12-01T10:03:09.229677503Z level=info msg="Completed cleanup jobs" duration=22.091826ms
Dec  1 05:03:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=plugins.update.checker t=2025-12-01T10:03:09.320042051Z level=info msg="Update check succeeded" duration=48.464799ms
Dec  1 05:03:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=grafana.update.checker t=2025-12-01T10:03:09.324995667Z level=info msg="Update check succeeded" duration=46.387703ms
Dec  1 05:03:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:03:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:03:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:03:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:03:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:03:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:03:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:03:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:03:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:09 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:10 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:10.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:03:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:10.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:03:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:11 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Dec  1 05:03:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:11] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:03:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:11] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:03:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:11 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:12 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:12 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:03:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:12 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:03:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:12.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:12.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:13 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:03:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:13 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:14 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:03:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:14.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:03:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:14.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:15 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:03:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:15 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:03:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:15 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:16 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:16.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:16.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:17 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:17.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:03:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:03:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:17 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:18 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:18.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:18.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:03:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:18.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:19 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:03:19 np0005540825 podman[188721]: 2025-12-01 10:03:19.240999448 +0000 UTC m=+0.116340282 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  1 05:03:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:19 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:20 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:20.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:03:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:20.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:03:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:21 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100321 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:03:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:03:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:21] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:03:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:21] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:03:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:21 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:22 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:22.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:22.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:23 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:03:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:23 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:24.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:03:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:03:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:03:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:24.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:03:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:25 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:03:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:25 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:26 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:26.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:26.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:27 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:27.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:03:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:27.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:03:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:27.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:03:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec  1 05:03:27 np0005540825 kernel: SELinux:  Converting 2776 SID table entries...
Dec  1 05:03:27 np0005540825 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 05:03:27 np0005540825 kernel: SELinux:  policy capability open_perms=1
Dec  1 05:03:27 np0005540825 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 05:03:27 np0005540825 kernel: SELinux:  policy capability always_check_network=0
Dec  1 05:03:27 np0005540825 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 05:03:27 np0005540825 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 05:03:27 np0005540825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 05:03:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:27 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:27 np0005540825 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  1 05:03:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:28 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:28.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  1 05:03:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 05:03:28 np0005540825 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Dec  1 05:03:28 np0005540825 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Dec  1 05:03:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:28.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:03:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:03:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:28.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:03:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:29 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:03:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:29 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:29 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 05:03:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:30 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:30.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 05:03:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 05:03:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:03:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:30.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:03:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:31 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:03:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 05:03:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:31] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:03:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:31] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:03:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 05:03:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  1 05:03:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 05:03:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:31 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:31 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:31 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:31 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:31 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:31 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 05:03:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:03:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 0 op/s
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:03:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:32.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:03:32 np0005540825 podman[189085]: 2025-12-01 10:03:32.937539742 +0000 UTC m=+0.044549703 container create 38d4b0c7a8ac7be0faa21466ef51e728aaf320ffe68e2af4f63657d531c699d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_colden, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:03:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:32.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:32 np0005540825 systemd[1]: Started libpod-conmon-38d4b0c7a8ac7be0faa21466ef51e728aaf320ffe68e2af4f63657d531c699d7.scope.
Dec  1 05:03:33 np0005540825 podman[189085]: 2025-12-01 10:03:32.917754145 +0000 UTC m=+0.024764116 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:03:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:33 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:33 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:03:33 np0005540825 podman[189085]: 2025-12-01 10:03:33.049775208 +0000 UTC m=+0.156785169 container init 38d4b0c7a8ac7be0faa21466ef51e728aaf320ffe68e2af4f63657d531c699d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_colden, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:03:33 np0005540825 podman[189085]: 2025-12-01 10:03:33.062706116 +0000 UTC m=+0.169716067 container start 38d4b0c7a8ac7be0faa21466ef51e728aaf320ffe68e2af4f63657d531c699d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_colden, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:03:33 np0005540825 podman[189085]: 2025-12-01 10:03:33.065685519 +0000 UTC m=+0.172695490 container attach 38d4b0c7a8ac7be0faa21466ef51e728aaf320ffe68e2af4f63657d531c699d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:03:33 np0005540825 epic_colden[189118]: 167 167
Dec  1 05:03:33 np0005540825 systemd[1]: libpod-38d4b0c7a8ac7be0faa21466ef51e728aaf320ffe68e2af4f63657d531c699d7.scope: Deactivated successfully.
Dec  1 05:03:33 np0005540825 podman[189085]: 2025-12-01 10:03:33.07007243 +0000 UTC m=+0.177082391 container died 38d4b0c7a8ac7be0faa21466ef51e728aaf320ffe68e2af4f63657d531c699d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_colden, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 05:03:33 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7db3b52fd31a69b8267c2530661f437595fe5945444c82df5c1d1a0b24b05914-merged.mount: Deactivated successfully.
Dec  1 05:03:33 np0005540825 podman[189085]: 2025-12-01 10:03:33.112997198 +0000 UTC m=+0.220007179 container remove 38d4b0c7a8ac7be0faa21466ef51e728aaf320ffe68e2af4f63657d531c699d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_colden, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 05:03:33 np0005540825 systemd[1]: libpod-conmon-38d4b0c7a8ac7be0faa21466ef51e728aaf320ffe68e2af4f63657d531c699d7.scope: Deactivated successfully.
Dec  1 05:03:33 np0005540825 podman[189174]: 2025-12-01 10:03:33.271254927 +0000 UTC m=+0.044047980 container create 9ec6658622b8995c4eb5d84f7f219156da0eaea5bdb87782ac723a745523c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_brahmagupta, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  1 05:03:33 np0005540825 systemd[1]: Started libpod-conmon-9ec6658622b8995c4eb5d84f7f219156da0eaea5bdb87782ac723a745523c6e4.scope.
Dec  1 05:03:33 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:03:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e9b7e8c0cc0c1a97574ed5d28f27d18e368a73ca18b9839726fdd0c951c71d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:33 np0005540825 podman[189174]: 2025-12-01 10:03:33.253530217 +0000 UTC m=+0.026323270 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:03:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e9b7e8c0cc0c1a97574ed5d28f27d18e368a73ca18b9839726fdd0c951c71d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e9b7e8c0cc0c1a97574ed5d28f27d18e368a73ca18b9839726fdd0c951c71d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e9b7e8c0cc0c1a97574ed5d28f27d18e368a73ca18b9839726fdd0c951c71d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e9b7e8c0cc0c1a97574ed5d28f27d18e368a73ca18b9839726fdd0c951c71d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:33 np0005540825 podman[189174]: 2025-12-01 10:03:33.362004849 +0000 UTC m=+0.134797892 container init 9ec6658622b8995c4eb5d84f7f219156da0eaea5bdb87782ac723a745523c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:03:33 np0005540825 podman[189174]: 2025-12-01 10:03:33.376685245 +0000 UTC m=+0.149478288 container start 9ec6658622b8995c4eb5d84f7f219156da0eaea5bdb87782ac723a745523c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_brahmagupta, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec  1 05:03:33 np0005540825 podman[189174]: 2025-12-01 10:03:33.380532061 +0000 UTC m=+0.153325114 container attach 9ec6658622b8995c4eb5d84f7f219156da0eaea5bdb87782ac723a745523c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:03:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:33 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:33 np0005540825 friendly_brahmagupta[189201]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:03:33 np0005540825 friendly_brahmagupta[189201]: --> All data devices are unavailable
Dec  1 05:03:33 np0005540825 systemd[1]: libpod-9ec6658622b8995c4eb5d84f7f219156da0eaea5bdb87782ac723a745523c6e4.scope: Deactivated successfully.
Dec  1 05:03:33 np0005540825 podman[189174]: 2025-12-01 10:03:33.805874482 +0000 UTC m=+0.578667525 container died 9ec6658622b8995c4eb5d84f7f219156da0eaea5bdb87782ac723a745523c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_brahmagupta, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:03:33 np0005540825 systemd[1]: var-lib-containers-storage-overlay-56e9b7e8c0cc0c1a97574ed5d28f27d18e368a73ca18b9839726fdd0c951c71d-merged.mount: Deactivated successfully.
Dec  1 05:03:33 np0005540825 podman[189174]: 2025-12-01 10:03:33.86580157 +0000 UTC m=+0.638594653 container remove 9ec6658622b8995c4eb5d84f7f219156da0eaea5bdb87782ac723a745523c6e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 05:03:33 np0005540825 systemd[1]: libpod-conmon-9ec6658622b8995c4eb5d84f7f219156da0eaea5bdb87782ac723a745523c6e4.scope: Deactivated successfully.
Dec  1 05:03:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:34 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 0 op/s
Dec  1 05:03:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:03:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:34.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:03:34 np0005540825 podman[189357]: 2025-12-01 10:03:34.477226059 +0000 UTC m=+0.058648214 container create d186943e7a551d06420dbff7b156bcfe5f7a3e4931b679bc7b2aade3972ffcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  1 05:03:34 np0005540825 systemd[1]: Started libpod-conmon-d186943e7a551d06420dbff7b156bcfe5f7a3e4931b679bc7b2aade3972ffcfc.scope.
Dec  1 05:03:34 np0005540825 podman[189357]: 2025-12-01 10:03:34.44761991 +0000 UTC m=+0.029042165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:03:34 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:03:34 np0005540825 podman[189357]: 2025-12-01 10:03:34.561255524 +0000 UTC m=+0.142677709 container init d186943e7a551d06420dbff7b156bcfe5f7a3e4931b679bc7b2aade3972ffcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 05:03:34 np0005540825 podman[189357]: 2025-12-01 10:03:34.5683295 +0000 UTC m=+0.149751685 container start d186943e7a551d06420dbff7b156bcfe5f7a3e4931b679bc7b2aade3972ffcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:03:34 np0005540825 podman[189357]: 2025-12-01 10:03:34.571800006 +0000 UTC m=+0.153222211 container attach d186943e7a551d06420dbff7b156bcfe5f7a3e4931b679bc7b2aade3972ffcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 05:03:34 np0005540825 youthful_kirch[189375]: 167 167
Dec  1 05:03:34 np0005540825 systemd[1]: libpod-d186943e7a551d06420dbff7b156bcfe5f7a3e4931b679bc7b2aade3972ffcfc.scope: Deactivated successfully.
Dec  1 05:03:34 np0005540825 podman[189357]: 2025-12-01 10:03:34.573145813 +0000 UTC m=+0.154567998 container died d186943e7a551d06420dbff7b156bcfe5f7a3e4931b679bc7b2aade3972ffcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Dec  1 05:03:34 np0005540825 systemd[1]: var-lib-containers-storage-overlay-50cff163d578918449f4d825cdf16eace1c598e3c55cefa5941b8d1e872b1ca3-merged.mount: Deactivated successfully.
Dec  1 05:03:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:34 np0005540825 podman[189357]: 2025-12-01 10:03:34.611646099 +0000 UTC m=+0.193068264 container remove d186943e7a551d06420dbff7b156bcfe5f7a3e4931b679bc7b2aade3972ffcfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  1 05:03:34 np0005540825 systemd[1]: libpod-conmon-d186943e7a551d06420dbff7b156bcfe5f7a3e4931b679bc7b2aade3972ffcfc.scope: Deactivated successfully.
Dec  1 05:03:34 np0005540825 podman[189398]: 2025-12-01 10:03:34.840283716 +0000 UTC m=+0.050186420 container create b9d3de0b53236728501133c3576d43f4309a5b1af911ae6aec393171a65d8621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:03:34 np0005540825 systemd[1]: Started libpod-conmon-b9d3de0b53236728501133c3576d43f4309a5b1af911ae6aec393171a65d8621.scope.
Dec  1 05:03:34 np0005540825 podman[189398]: 2025-12-01 10:03:34.819068279 +0000 UTC m=+0.028971013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:03:34 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:03:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd0dd51af500ff7c4cbc144c2dd68219e4f4fa2311a9aa8a044257daec07c4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd0dd51af500ff7c4cbc144c2dd68219e4f4fa2311a9aa8a044257daec07c4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd0dd51af500ff7c4cbc144c2dd68219e4f4fa2311a9aa8a044257daec07c4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd0dd51af500ff7c4cbc144c2dd68219e4f4fa2311a9aa8a044257daec07c4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:34 np0005540825 podman[189398]: 2025-12-01 10:03:34.946734642 +0000 UTC m=+0.156637406 container init b9d3de0b53236728501133c3576d43f4309a5b1af911ae6aec393171a65d8621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  1 05:03:34 np0005540825 podman[189398]: 2025-12-01 10:03:34.960469482 +0000 UTC m=+0.170372226 container start b9d3de0b53236728501133c3576d43f4309a5b1af911ae6aec393171a65d8621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  1 05:03:34 np0005540825 podman[189398]: 2025-12-01 10:03:34.964570675 +0000 UTC m=+0.174473419 container attach b9d3de0b53236728501133c3576d43f4309a5b1af911ae6aec393171a65d8621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  1 05:03:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:03:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:34.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:03:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:35 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]: {
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:    "1": [
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:        {
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            "devices": [
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "/dev/loop3"
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            ],
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            "lv_name": "ceph_lv0",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            "lv_size": "21470642176",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            "name": "ceph_lv0",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            "tags": {
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.cluster_name": "ceph",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.crush_device_class": "",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.encrypted": "0",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.osd_id": "1",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.type": "block",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.vdo": "0",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:                "ceph.with_tpm": "0"
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            },
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            "type": "block",
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:            "vg_name": "ceph_vg0"
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:        }
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]:    ]
Dec  1 05:03:35 np0005540825 gifted_mirzakhani[189414]: }
Dec  1 05:03:35 np0005540825 systemd[1]: libpod-b9d3de0b53236728501133c3576d43f4309a5b1af911ae6aec393171a65d8621.scope: Deactivated successfully.
Dec  1 05:03:35 np0005540825 podman[189398]: 2025-12-01 10:03:35.273652649 +0000 UTC m=+0.483555373 container died b9d3de0b53236728501133c3576d43f4309a5b1af911ae6aec393171a65d8621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Dec  1 05:03:35 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2bd0dd51af500ff7c4cbc144c2dd68219e4f4fa2311a9aa8a044257daec07c4c-merged.mount: Deactivated successfully.
Dec  1 05:03:35 np0005540825 podman[189398]: 2025-12-01 10:03:35.318917651 +0000 UTC m=+0.528820355 container remove b9d3de0b53236728501133c3576d43f4309a5b1af911ae6aec393171a65d8621 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:03:35 np0005540825 systemd[1]: libpod-conmon-b9d3de0b53236728501133c3576d43f4309a5b1af911ae6aec393171a65d8621.scope: Deactivated successfully.
Dec  1 05:03:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:35 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c0029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:35 np0005540825 podman[189527]: 2025-12-01 10:03:35.944861253 +0000 UTC m=+0.039527715 container create e493c1f1d54bf577802406597d12d46b04185cd9dfbbe6839011e8d89b7c9f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  1 05:03:35 np0005540825 systemd[1]: Started libpod-conmon-e493c1f1d54bf577802406597d12d46b04185cd9dfbbe6839011e8d89b7c9f12.scope.
Dec  1 05:03:36 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:03:36 np0005540825 podman[189527]: 2025-12-01 10:03:35.927856742 +0000 UTC m=+0.022523234 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:03:36 np0005540825 podman[189527]: 2025-12-01 10:03:36.042133075 +0000 UTC m=+0.136799617 container init e493c1f1d54bf577802406597d12d46b04185cd9dfbbe6839011e8d89b7c9f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 05:03:36 np0005540825 podman[189527]: 2025-12-01 10:03:36.049977232 +0000 UTC m=+0.144643734 container start e493c1f1d54bf577802406597d12d46b04185cd9dfbbe6839011e8d89b7c9f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 05:03:36 np0005540825 podman[189527]: 2025-12-01 10:03:36.054233989 +0000 UTC m=+0.148900451 container attach e493c1f1d54bf577802406597d12d46b04185cd9dfbbe6839011e8d89b7c9f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  1 05:03:36 np0005540825 objective_edison[189545]: 167 167
Dec  1 05:03:36 np0005540825 systemd[1]: libpod-e493c1f1d54bf577802406597d12d46b04185cd9dfbbe6839011e8d89b7c9f12.scope: Deactivated successfully.
Dec  1 05:03:36 np0005540825 conmon[189545]: conmon e493c1f1d54bf5778024 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e493c1f1d54bf577802406597d12d46b04185cd9dfbbe6839011e8d89b7c9f12.scope/container/memory.events
Dec  1 05:03:36 np0005540825 podman[189527]: 2025-12-01 10:03:36.058862408 +0000 UTC m=+0.153528910 container died e493c1f1d54bf577802406597d12d46b04185cd9dfbbe6839011e8d89b7c9f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:03:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:36 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2456dc65e3f7d8d5052e0446ea591b7227ee8484836fa0f80afaabb55df7b8a3-merged.mount: Deactivated successfully.
Dec  1 05:03:36 np0005540825 podman[189527]: 2025-12-01 10:03:36.117876721 +0000 UTC m=+0.212543193 container remove e493c1f1d54bf577802406597d12d46b04185cd9dfbbe6839011e8d89b7c9f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:03:36 np0005540825 systemd[1]: libpod-conmon-e493c1f1d54bf577802406597d12d46b04185cd9dfbbe6839011e8d89b7c9f12.scope: Deactivated successfully.
Dec  1 05:03:36 np0005540825 podman[189659]: 2025-12-01 10:03:36.274341781 +0000 UTC m=+0.043903636 container create 76c35081256025e651638876fac33db1664f57ddb0d32228944de4c9956f93a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 05:03:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 461 B/s rd, 0 op/s
Dec  1 05:03:36 np0005540825 systemd[1]: Started libpod-conmon-76c35081256025e651638876fac33db1664f57ddb0d32228944de4c9956f93a3.scope.
Dec  1 05:03:36 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:03:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d730d58e573c8b7ead6958a3312e75da7aaeb866b5360f5e83fee8e64e607/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d730d58e573c8b7ead6958a3312e75da7aaeb866b5360f5e83fee8e64e607/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d730d58e573c8b7ead6958a3312e75da7aaeb866b5360f5e83fee8e64e607/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d730d58e573c8b7ead6958a3312e75da7aaeb866b5360f5e83fee8e64e607/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:03:36 np0005540825 podman[189659]: 2025-12-01 10:03:36.256902208 +0000 UTC m=+0.026464093 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:03:36 np0005540825 podman[189659]: 2025-12-01 10:03:36.35383194 +0000 UTC m=+0.123393805 container init 76c35081256025e651638876fac33db1664f57ddb0d32228944de4c9956f93a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:03:36 np0005540825 podman[189706]: 2025-12-01 10:03:36.359145177 +0000 UTC m=+0.055679652 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 05:03:36 np0005540825 podman[189659]: 2025-12-01 10:03:36.360499985 +0000 UTC m=+0.130061840 container start 76c35081256025e651638876fac33db1664f57ddb0d32228944de4c9956f93a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  1 05:03:36 np0005540825 podman[189659]: 2025-12-01 10:03:36.363743305 +0000 UTC m=+0.133305160 container attach 76c35081256025e651638876fac33db1664f57ddb0d32228944de4c9956f93a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:03:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:36.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:36 np0005540825 lvm[190283]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:03:36 np0005540825 lvm[190283]: VG ceph_vg0 finished
Dec  1 05:03:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:36.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:37 np0005540825 magical_feistel[189723]: {}
Dec  1 05:03:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:37 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:37 np0005540825 systemd[1]: libpod-76c35081256025e651638876fac33db1664f57ddb0d32228944de4c9956f93a3.scope: Deactivated successfully.
Dec  1 05:03:37 np0005540825 podman[189659]: 2025-12-01 10:03:37.05885778 +0000 UTC m=+0.828419635 container died 76c35081256025e651638876fac33db1664f57ddb0d32228944de4c9956f93a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_feistel, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  1 05:03:37 np0005540825 systemd[1]: libpod-76c35081256025e651638876fac33db1664f57ddb0d32228944de4c9956f93a3.scope: Consumed 1.008s CPU time.
Dec  1 05:03:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:37.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:03:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:37.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:03:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:37.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:03:37 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f22d730d58e573c8b7ead6958a3312e75da7aaeb866b5360f5e83fee8e64e607-merged.mount: Deactivated successfully.
Dec  1 05:03:37 np0005540825 podman[189659]: 2025-12-01 10:03:37.105949723 +0000 UTC m=+0.875511578 container remove 76c35081256025e651638876fac33db1664f57ddb0d32228944de4c9956f93a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Dec  1 05:03:37 np0005540825 systemd[1]: libpod-conmon-76c35081256025e651638876fac33db1664f57ddb0d32228944de4c9956f93a3.scope: Deactivated successfully.
Dec  1 05:03:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:03:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:03:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:37 np0005540825 systemd[1]: Stopping OpenSSH server daemon...
Dec  1 05:03:37 np0005540825 systemd[1]: sshd.service: Deactivated successfully.
Dec  1 05:03:37 np0005540825 systemd[1]: Stopped OpenSSH server daemon.
Dec  1 05:03:37 np0005540825 systemd[1]: sshd.service: Consumed 3.901s CPU time, read 32.0K from disk, written 64.0K to disk.
Dec  1 05:03:37 np0005540825 systemd[1]: Stopped target sshd-keygen.target.
Dec  1 05:03:37 np0005540825 systemd[1]: Stopping sshd-keygen.target...
Dec  1 05:03:37 np0005540825 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 05:03:37 np0005540825 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 05:03:37 np0005540825 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 05:03:37 np0005540825 systemd[1]: Reached target sshd-keygen.target.
Dec  1 05:03:37 np0005540825 systemd[1]: Starting OpenSSH server daemon...
Dec  1 05:03:37 np0005540825 systemd[1]: Started OpenSSH server daemon.
Dec  1 05:03:37 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:37 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:03:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:37 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:38 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c0029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 0 op/s
Dec  1 05:03:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:38.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:38.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:03:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:03:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:38.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:03:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:39 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:03:39
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'volumes', 'images', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', '.nfs', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:03:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:03:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:03:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:03:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:39 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:03:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:03:39 np0005540825 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 05:03:39 np0005540825 systemd[1]: Starting man-db-cache-update.service...
Dec  1 05:03:39 np0005540825 systemd[1]: Reloading.
Dec  1 05:03:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:40 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:40 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:03:40 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:03:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 0 op/s
Dec  1 05:03:40 np0005540825 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 05:03:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:40.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:40.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:41 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c0029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:41] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:03:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:41] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:03:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:41 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:42 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 276 B/s rd, 0 op/s
Dec  1 05:03:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:03:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:42.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:03:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:43.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:43 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:43 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c0029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:44 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:03:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:44.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:45.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:45 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:45 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:46 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 05:03:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:46.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:03:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:47.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:03:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:47 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd590002100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:47.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:03:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:47 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:48 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:03:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:48.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:48.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:03:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:49.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:49 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c003f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:49 np0005540825 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 05:03:49 np0005540825 systemd[1]: Finished man-db-cache-update.service.
Dec  1 05:03:49 np0005540825 systemd[1]: man-db-cache-update.service: Consumed 12.109s CPU time.
Dec  1 05:03:49 np0005540825 systemd[1]: run-r658618482ab24794b15afa24139493ae.service: Deactivated successfully.
Dec  1 05:03:49 np0005540825 podman[199127]: 2025-12-01 10:03:49.494883988 +0000 UTC m=+0.121065662 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 05:03:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:49 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:50 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:03:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:50.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:03:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:51.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:03:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:51 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:51] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:03:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:03:51] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:03:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:51 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:52 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:03:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:52.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:53.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:53 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:53 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:54 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:03:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:54.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:03:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:03:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:55 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:55 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:56 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 05:03:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:56.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:03:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:03:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:57 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:57.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:03:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:57 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:58 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:03:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:03:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:03:58.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:03:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:03:58.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:03:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:03:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:03:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:03:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:03:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:59 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:03:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:03:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:03:59 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:00 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:04:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:00.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:01.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:01 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:01] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:04:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:01] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:04:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:01 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:02 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:04:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:02.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:03.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:03 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003a20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:03 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:04 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:04:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:04.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:04:04.556 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:04:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:04:04.556 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:04:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:04:04.556 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:04:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:04:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:05.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:04:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:05 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:05 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:06 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:04:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:06.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:06 np0005540825 podman[199294]: 2025-12-01 10:04:06.780137827 +0000 UTC m=+0.071600873 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:04:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:07.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:07 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:07.079Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:04:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:07.079Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:04:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:07.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:04:07 np0005540825 python3.9[199339]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 05:04:07 np0005540825 systemd[1]: Reloading.
Dec  1 05:04:07 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:04:07 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
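The python3.9 line above records ansible-ansible.builtin.systemd stopping, disabling, and masking the monolithic libvirtd (the systemd "Reloading." and generator lines are the daemon-reload that follows). A rough equivalent of that task as plain systemctl calls; this is a sketch of the net effect, not what the module literally executes:

    import subprocess

    def stop_disable_mask(unit: str) -> None:
        # state=stopped, enabled=False, masked=True from the logged task.
        for args in (["stop", unit], ["disable", unit], ["mask", unit]):
            subprocess.run(["systemctl", *args], check=True)

    if __name__ == "__main__":
        stop_disable_mask("libvirtd")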
Dec  1 05:04:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100407 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:04:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:07 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:08 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:04:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:08.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:08 np0005540825 python3.9[199533]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 05:04:08 np0005540825 systemd[1]: Reloading.
Dec  1 05:04:08 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:04:08 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:04:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:08.903Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:04:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:08.903Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:04:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:08.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:04:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:09.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:09 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:04:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
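The mon_command/audit pair above is the mgr polling the monitor for the OSD blocklist. The same query can be issued directly with the rados Python binding's mon_command, assuming a readable ceph.conf and client.admin keyring on this host:

    import json
    import rados  # python3-rados

    # Identical JSON command to the one dispatched in the audit log above.
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        if ret == 0:
            print(json.loads(outbuf or b"[]"))
        else:
            print("mon_command failed:", ret, outs)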
Dec  1 05:04:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:04:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:04:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:04:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:04:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:04:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:04:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:09 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:09 np0005540825 python3.9[199726]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 05:04:09 np0005540825 systemd[1]: Reloading.
Dec  1 05:04:09 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:04:09 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:04:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:10 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:04:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:10.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:11.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:11 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:11 np0005540825 python3.9[199917]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 05:04:11 np0005540825 systemd[1]: Reloading.
Dec  1 05:04:11 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:04:11 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:04:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:11] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:04:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:11] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:04:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:11 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:12 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:04:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:12.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:12 np0005540825 python3.9[200109]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:12 np0005540825 systemd[1]: Reloading.
Dec  1 05:04:12 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:04:12 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:04:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:04:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:13.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:04:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:13 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:13 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:14 np0005540825 python3.9[200301]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:14 np0005540825 systemd[1]: Reloading.
Dec  1 05:04:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:14 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:14 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:04:14 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:04:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:04:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:14.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:15.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:15 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:15 np0005540825 python3.9[200491]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:15 np0005540825 systemd[1]: Reloading.
Dec  1 05:04:15 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:04:15 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:04:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:15 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:16 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:04:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:16.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:16 np0005540825 python3.9[200683]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:17.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:17 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:17.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:04:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:17.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:04:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:17.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:04:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:17 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:17 np0005540825 python3.9[200840]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:18 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:18 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:04:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:04:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:18.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:18.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:04:18 np0005540825 systemd[1]: Reloading.
Dec  1 05:04:19 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:04:19 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:04:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:19.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:19 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:19 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:19 np0005540825 podman[201029]: 2025-12-01 10:04:19.910900561 +0000 UTC m=+0.123509089 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 05:04:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:20 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:20 np0005540825 python3.9[201075]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 05:04:20 np0005540825 systemd[1]: Reloading.
Dec  1 05:04:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:04:20 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:04:20 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:04:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:04:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:20.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:04:20 np0005540825 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  1 05:04:20 np0005540825 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  1 05:04:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000056s ======
Dec  1 05:04:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:21.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Dec  1 05:04:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:21 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:21 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:04:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:21 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:04:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:21] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:04:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:21] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:04:21 np0005540825 python3.9[201277]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:21 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:22 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:04:22 np0005540825 python3.9[201433]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:04:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:04:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:23 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:04:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:23.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:04:23 np0005540825 python3.9[201588]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:23 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:24 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
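The reaper lines trace a complete grace cycle: the server enters a 90 second grace period at 10:04:18, reloads client recovery info from the backend, finds no clients waiting to reclaim (clid count 0), and lifts grace early here at 10:04:24. For a cephadm NFS cluster the grace database lives in RADOS and can be inspected with ganesha-rados-grace; the pool and namespace below are assumptions based on cephadm defaults and the cluster name in the log, not values read from this host:

    import subprocess

    # Dump the shared grace/recovery state for the "cephfs" NFS cluster.
    subprocess.run(
        ["ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs", "dump"],
        check=False,
    )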
Dec  1 05:04:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:04:24 np0005540825 python3.9[201745]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:24.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:04:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:04:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:25.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:25 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554004130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:25 np0005540825 python3.9[201900]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:25 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:26 np0005540825 python3.9[202057]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:26 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:04:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:26.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:26 np0005540825 python3.9[202212]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:27.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:27.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:04:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:27 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c0046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:27 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:27 np0005540825 python3.9[202369]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:28 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 05:04:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:28.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:28 np0005540825 python3.9[202524]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:28.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:04:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:29.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:29 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:29 np0005540825 python3.9[202679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100429 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:04:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:29 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c0046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:30 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554004170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:30 np0005540825 python3.9[202836]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 05:04:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:30.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:30 np0005540825 auditd[703]: Audit daemon rotating log files
Dec  1 05:04:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:31.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:31 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:31 np0005540825 python3.9[202991]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:31] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec  1 05:04:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:31] "GET /metrics HTTP/1.1" 200 48433 "" "Prometheus/2.51.0"
Dec  1 05:04:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:31 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:31 np0005540825 python3.9[203148]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 05:04:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:32 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c0046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 05:04:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:32.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:32 np0005540825 python3.9[203303]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
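The run of ansible tasks from virtlogd.socket through virtsecretd-admin.socket above enables the modular libvirt socket units (virtlogd has no -ro socket, hence its shorter pair) in place of the masked monolithic libvirtd. The same net effect collected into one loop; a sketch, not the ansible module itself:

    import subprocess

    # Exactly the socket units enabled one task at a time in the log above.
    UNITS = [
        "virtlogd.socket", "virtlogd-admin.socket",
        "virtnodedevd.socket", "virtnodedevd-ro.socket", "virtnodedevd-admin.socket",
        "virtproxyd.socket", "virtproxyd-ro.socket", "virtproxyd-admin.socket",
        "virtqemud.socket", "virtqemud-ro.socket", "virtqemud-admin.socket",
        "virtsecretd.socket", "virtsecretd-ro.socket", "virtsecretd-admin.socket",
    ]
    for unit in UNITS:
        subprocess.run(["systemctl", "enable", unit], check=True)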
Dec  1 05:04:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:33 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:33 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:34 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:04:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:34.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:34 np0005540825 python3.9[203460]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:04:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:35 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd55c0046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:35 np0005540825 python3.9[203613]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:04:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:35 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5540041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:36 np0005540825 python3.9[203766]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:04:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:36 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:04:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:36.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:36 np0005540825 python3.9[203918]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:04:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:37.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:37.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:04:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:37 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:37 np0005540825 podman[204043]: 2025-12-01 10:04:37.209507959 +0000 UTC m=+0.068483376 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:04:37 np0005540825 python3.9[204090]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:04:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:37 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:38 np0005540825 python3.9[204293]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
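The ansible.builtin.file tasks above create the libvirt PKI directories and label them container_file_t so the containerized services can read them (/etc/pki/qemu additionally gets group qemu and no explicit mode). A sketch of the net effect, with chcon standing in for the module's SELinux handling:

    import os
    import subprocess

    for path, mode in [
        ("/etc/pki/libvirt", 0o755),
        ("/etc/pki/libvirt/private", 0o755),
        ("/etc/pki/CA", 0o755),
        ("/etc/pki/qemu", None),  # mode left unset in the logged task
    ]:
        os.makedirs(path, exist_ok=True)
        if mode is not None:
            os.chmod(path, mode)
        subprocess.run(["chcon", "-t", "container_file_t", path], check=True)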
Dec  1 05:04:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:38 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5540041d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:04:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:04:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:38.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:04:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:04:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:38.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:04:38 np0005540825 python3.9[204528]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:04:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:39 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:39 np0005540825 podman[204573]: 2025-12-01 10:04:39.135972529 +0000 UTC m=+0.056796163 container create 286b9e9a4008d1e9b0fad40e56452471b676be92a1e1024b6fddcba782b788a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 05:04:39 np0005540825 systemd[1]: Started libpod-conmon-286b9e9a4008d1e9b0fad40e56452471b676be92a1e1024b6fddcba782b788a1.scope.
Dec  1 05:04:39 np0005540825 podman[204573]: 2025-12-01 10:04:39.113923649 +0000 UTC m=+0.034747343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:04:39 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:04:39 np0005540825 podman[204573]: 2025-12-01 10:04:39.236501721 +0000 UTC m=+0.157325385 container init 286b9e9a4008d1e9b0fad40e56452471b676be92a1e1024b6fddcba782b788a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_khayyam, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:04:39 np0005540825 podman[204573]: 2025-12-01 10:04:39.244761749 +0000 UTC m=+0.165585393 container start 286b9e9a4008d1e9b0fad40e56452471b676be92a1e1024b6fddcba782b788a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:04:39 np0005540825 podman[204573]: 2025-12-01 10:04:39.248981416 +0000 UTC m=+0.169805060 container attach 286b9e9a4008d1e9b0fad40e56452471b676be92a1e1024b6fddcba782b788a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Dec  1 05:04:39 np0005540825 affectionate_khayyam[204637]: 167 167
Dec  1 05:04:39 np0005540825 systemd[1]: libpod-286b9e9a4008d1e9b0fad40e56452471b676be92a1e1024b6fddcba782b788a1.scope: Deactivated successfully.
Dec  1 05:04:39 np0005540825 podman[204573]: 2025-12-01 10:04:39.251968299 +0000 UTC m=+0.172791963 container died 286b9e9a4008d1e9b0fad40e56452471b676be92a1e1024b6fddcba782b788a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:04:39 np0005540825 systemd[1]: var-lib-containers-storage-overlay-90e6dd175ca0cfdc0a8f2db9ce5aef5a4d8bd78d63adc7ef518dcb2d516385a2-merged.mount: Deactivated successfully.
Dec  1 05:04:39 np0005540825 podman[204573]: 2025-12-01 10:04:39.303351881 +0000 UTC m=+0.224175565 container remove 286b9e9a4008d1e9b0fad40e56452471b676be92a1e1024b6fddcba782b788a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 05:04:39 np0005540825 systemd[1]: libpod-conmon-286b9e9a4008d1e9b0fad40e56452471b676be92a1e1024b6fddcba782b788a1.scope: Deactivated successfully.
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:04:39
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['vms', '.nfs', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data']
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:04:39 np0005540825 podman[204708]: 2025-12-01 10:04:39.470669671 +0000 UTC m=+0.042267461 container create 5b900ed07fae5b59297c8eb9d7dcd0036e3d19322a705cb329d69474f61dc556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_grothendieck, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 05:04:39 np0005540825 systemd[1]: Started libpod-conmon-5b900ed07fae5b59297c8eb9d7dcd0036e3d19322a705cb329d69474f61dc556.scope.
Dec  1 05:04:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:04:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:04:39 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:04:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de1208e56301d1810270d0dd5bda5ca7685b037a021584ea5de6ec70ecd5172/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de1208e56301d1810270d0dd5bda5ca7685b037a021584ea5de6ec70ecd5172/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de1208e56301d1810270d0dd5bda5ca7685b037a021584ea5de6ec70ecd5172/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de1208e56301d1810270d0dd5bda5ca7685b037a021584ea5de6ec70ecd5172/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de1208e56301d1810270d0dd5bda5ca7685b037a021584ea5de6ec70ecd5172/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:39 np0005540825 podman[204708]: 2025-12-01 10:04:39.454053871 +0000 UTC m=+0.025651681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:04:39 np0005540825 podman[204708]: 2025-12-01 10:04:39.567760718 +0000 UTC m=+0.139358518 container init 5b900ed07fae5b59297c8eb9d7dcd0036e3d19322a705cb329d69474f61dc556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  1 05:04:39 np0005540825 podman[204708]: 2025-12-01 10:04:39.580194832 +0000 UTC m=+0.151792622 container start 5b900ed07fae5b59297c8eb9d7dcd0036e3d19322a705cb329d69474f61dc556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_grothendieck, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  1 05:04:39 np0005540825 podman[204708]: 2025-12-01 10:04:39.584527452 +0000 UTC m=+0.156125242 container attach 5b900ed07fae5b59297c8eb9d7dcd0036e3d19322a705cb329d69474f61dc556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_grothendieck, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:04:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:04:39 np0005540825 python3.9[204776]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764583478.2748265-1622-176905479618478/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:39 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:04:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:04:39 np0005540825 laughing_grothendieck[204778]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:04:39 np0005540825 laughing_grothendieck[204778]: --> All data devices are unavailable
Dec  1 05:04:39 np0005540825 systemd[1]: libpod-5b900ed07fae5b59297c8eb9d7dcd0036e3d19322a705cb329d69474f61dc556.scope: Deactivated successfully.
Dec  1 05:04:39 np0005540825 podman[204708]: 2025-12-01 10:04:39.992646716 +0000 UTC m=+0.564244516 container died 5b900ed07fae5b59297c8eb9d7dcd0036e3d19322a705cb329d69474f61dc556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_grothendieck, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:04:40 np0005540825 systemd[1]: var-lib-containers-storage-overlay-3de1208e56301d1810270d0dd5bda5ca7685b037a021584ea5de6ec70ecd5172-merged.mount: Deactivated successfully.
Dec  1 05:04:40 np0005540825 podman[204708]: 2025-12-01 10:04:40.048574053 +0000 UTC m=+0.620171853 container remove 5b900ed07fae5b59297c8eb9d7dcd0036e3d19322a705cb329d69474f61dc556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 05:04:40 np0005540825 systemd[1]: libpod-conmon-5b900ed07fae5b59297c8eb9d7dcd0036e3d19322a705cb329d69474f61dc556.scope: Deactivated successfully.
Dec  1 05:04:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:40 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:40 np0005540825 python3.9[204979]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:04:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:04:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:40.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:40 np0005540825 podman[205119]: 2025-12-01 10:04:40.765162113 +0000 UTC m=+0.055033944 container create e060d44c68c396b01cf5a6d6659281a4e362d05ad24b29d73f20e4d7321ec5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:04:40 np0005540825 systemd[1]: Started libpod-conmon-e060d44c68c396b01cf5a6d6659281a4e362d05ad24b29d73f20e4d7321ec5f2.scope.
Dec  1 05:04:40 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:04:40 np0005540825 podman[205119]: 2025-12-01 10:04:40.744780829 +0000 UTC m=+0.034652690 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:04:40 np0005540825 podman[205119]: 2025-12-01 10:04:40.847979385 +0000 UTC m=+0.137851296 container init e060d44c68c396b01cf5a6d6659281a4e362d05ad24b29d73f20e4d7321ec5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Dec  1 05:04:40 np0005540825 podman[205119]: 2025-12-01 10:04:40.85610474 +0000 UTC m=+0.145976571 container start e060d44c68c396b01cf5a6d6659281a4e362d05ad24b29d73f20e4d7321ec5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_mclean, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  1 05:04:40 np0005540825 podman[205119]: 2025-12-01 10:04:40.860606515 +0000 UTC m=+0.150478376 container attach e060d44c68c396b01cf5a6d6659281a4e362d05ad24b29d73f20e4d7321ec5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_mclean, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:04:40 np0005540825 thirsty_mclean[205170]: 167 167
Dec  1 05:04:40 np0005540825 systemd[1]: libpod-e060d44c68c396b01cf5a6d6659281a4e362d05ad24b29d73f20e4d7321ec5f2.scope: Deactivated successfully.
Dec  1 05:04:40 np0005540825 podman[205119]: 2025-12-01 10:04:40.863481194 +0000 UTC m=+0.153353055 container died e060d44c68c396b01cf5a6d6659281a4e362d05ad24b29d73f20e4d7321ec5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_mclean, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:04:40 np0005540825 systemd[1]: var-lib-containers-storage-overlay-06f70c724936abd9f6eec2ef2b9a20d73996753736c35e21ee5eb6b213c0d6c6-merged.mount: Deactivated successfully.
Dec  1 05:04:40 np0005540825 podman[205119]: 2025-12-01 10:04:40.920958665 +0000 UTC m=+0.210830516 container remove e060d44c68c396b01cf5a6d6659281a4e362d05ad24b29d73f20e4d7321ec5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 05:04:40 np0005540825 systemd[1]: libpod-conmon-e060d44c68c396b01cf5a6d6659281a4e362d05ad24b29d73f20e4d7321ec5f2.scope: Deactivated successfully.
Dec  1 05:04:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:41.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:41 np0005540825 python3.9[205193]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764583479.8706043-1622-163812714561512/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:41 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5540041d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:41 np0005540825 podman[205212]: 2025-12-01 10:04:41.102770176 +0000 UTC m=+0.052149744 container create 31e722d765109d916b1d1d5d8b826a892e4ca44c70497308df0594be1b3f7736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_perlman, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec  1 05:04:41 np0005540825 systemd[1]: Started libpod-conmon-31e722d765109d916b1d1d5d8b826a892e4ca44c70497308df0594be1b3f7736.scope.
Dec  1 05:04:41 np0005540825 podman[205212]: 2025-12-01 10:04:41.077545058 +0000 UTC m=+0.026924636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:04:41 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:04:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a4c3596cec8a66090af13439ca58ec3ca99d6d3b971d4d684cab731f29b8cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a4c3596cec8a66090af13439ca58ec3ca99d6d3b971d4d684cab731f29b8cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a4c3596cec8a66090af13439ca58ec3ca99d6d3b971d4d684cab731f29b8cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a4c3596cec8a66090af13439ca58ec3ca99d6d3b971d4d684cab731f29b8cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:41 np0005540825 podman[205212]: 2025-12-01 10:04:41.202458695 +0000 UTC m=+0.151838243 container init 31e722d765109d916b1d1d5d8b826a892e4ca44c70497308df0594be1b3f7736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  1 05:04:41 np0005540825 podman[205212]: 2025-12-01 10:04:41.215493705 +0000 UTC m=+0.164873253 container start 31e722d765109d916b1d1d5d8b826a892e4ca44c70497308df0594be1b3f7736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_perlman, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:04:41 np0005540825 podman[205212]: 2025-12-01 10:04:41.218584831 +0000 UTC m=+0.167964379 container attach 31e722d765109d916b1d1d5d8b826a892e4ca44c70497308df0594be1b3f7736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_perlman, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:04:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:41] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:04:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:41] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]: {
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:    "1": [
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:        {
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            "devices": [
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "/dev/loop3"
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            ],
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            "lv_name": "ceph_lv0",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            "lv_size": "21470642176",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            "name": "ceph_lv0",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            "tags": {
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.cluster_name": "ceph",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.crush_device_class": "",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.encrypted": "0",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.osd_id": "1",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.type": "block",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.vdo": "0",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:                "ceph.with_tpm": "0"
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            },
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            "type": "block",
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:            "vg_name": "ceph_vg0"
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:        }
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]:    ]
Dec  1 05:04:41 np0005540825 dazzling_perlman[205236]: }
Dec  1 05:04:41 np0005540825 systemd[1]: libpod-31e722d765109d916b1d1d5d8b826a892e4ca44c70497308df0594be1b3f7736.scope: Deactivated successfully.
Dec  1 05:04:41 np0005540825 podman[205212]: 2025-12-01 10:04:41.570996133 +0000 UTC m=+0.520375681 container died 31e722d765109d916b1d1d5d8b826a892e4ca44c70497308df0594be1b3f7736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  1 05:04:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay-36a4c3596cec8a66090af13439ca58ec3ca99d6d3b971d4d684cab731f29b8cf-merged.mount: Deactivated successfully.
Dec  1 05:04:41 np0005540825 podman[205212]: 2025-12-01 10:04:41.617680485 +0000 UTC m=+0.567060033 container remove 31e722d765109d916b1d1d5d8b826a892e4ca44c70497308df0594be1b3f7736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 05:04:41 np0005540825 systemd[1]: libpod-conmon-31e722d765109d916b1d1d5d8b826a892e4ca44c70497308df0594be1b3f7736.scope: Deactivated successfully.
Dec  1 05:04:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:41 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:41 np0005540825 python3.9[205400]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:04:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:42 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:42 np0005540825 podman[205617]: 2025-12-01 10:04:42.373524811 +0000 UTC m=+0.053733398 container create d20274f0e98092723c585ae0e681e649bcfa8d3d26575a06a658def68297ab67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:04:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Dec  1 05:04:42 np0005540825 systemd[1]: Started libpod-conmon-d20274f0e98092723c585ae0e681e649bcfa8d3d26575a06a658def68297ab67.scope.
Dec  1 05:04:42 np0005540825 podman[205617]: 2025-12-01 10:04:42.351662906 +0000 UTC m=+0.031871503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:04:42 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:04:42 np0005540825 podman[205617]: 2025-12-01 10:04:42.49274118 +0000 UTC m=+0.172949757 container init d20274f0e98092723c585ae0e681e649bcfa8d3d26575a06a658def68297ab67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_wilson, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:04:42 np0005540825 python3.9[205615]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764583481.282165-1622-89775226868625/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:42 np0005540825 podman[205617]: 2025-12-01 10:04:42.501659826 +0000 UTC m=+0.181868373 container start d20274f0e98092723c585ae0e681e649bcfa8d3d26575a06a658def68297ab67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 05:04:42 np0005540825 podman[205617]: 2025-12-01 10:04:42.504990689 +0000 UTC m=+0.185199266 container attach d20274f0e98092723c585ae0e681e649bcfa8d3d26575a06a658def68297ab67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:04:42 np0005540825 systemd[1]: libpod-d20274f0e98092723c585ae0e681e649bcfa8d3d26575a06a658def68297ab67.scope: Deactivated successfully.
Dec  1 05:04:42 np0005540825 keen_wilson[205634]: 167 167
Dec  1 05:04:42 np0005540825 conmon[205634]: conmon d20274f0e98092723c58 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d20274f0e98092723c585ae0e681e649bcfa8d3d26575a06a658def68297ab67.scope/container/memory.events
Dec  1 05:04:42 np0005540825 podman[205617]: 2025-12-01 10:04:42.509316478 +0000 UTC m=+0.189525035 container died d20274f0e98092723c585ae0e681e649bcfa8d3d26575a06a658def68297ab67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  1 05:04:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:42.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:42 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4ce9f7ff3c6865550b80f0f85f07d9a0390545a42f4c75a26ff015b457f3d974-merged.mount: Deactivated successfully.
Dec  1 05:04:42 np0005540825 podman[205617]: 2025-12-01 10:04:42.541842578 +0000 UTC m=+0.222051125 container remove d20274f0e98092723c585ae0e681e649bcfa8d3d26575a06a658def68297ab67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_wilson, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:04:42 np0005540825 systemd[1]: libpod-conmon-d20274f0e98092723c585ae0e681e649bcfa8d3d26575a06a658def68297ab67.scope: Deactivated successfully.
Dec  1 05:04:42 np0005540825 podman[205694]: 2025-12-01 10:04:42.713751806 +0000 UTC m=+0.057681968 container create 8e003e57edf608214b48d7572fad94fe12b4284edc82eb2bb9604f24cc6cdb71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:04:42 np0005540825 systemd[1]: Started libpod-conmon-8e003e57edf608214b48d7572fad94fe12b4284edc82eb2bb9604f24cc6cdb71.scope.
Dec  1 05:04:42 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:04:42 np0005540825 podman[205694]: 2025-12-01 10:04:42.695644845 +0000 UTC m=+0.039575037 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:04:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab79fd05c9ec889487f746265435f937fab35c59725741a3ff716c6055fed60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab79fd05c9ec889487f746265435f937fab35c59725741a3ff716c6055fed60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab79fd05c9ec889487f746265435f937fab35c59725741a3ff716c6055fed60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab79fd05c9ec889487f746265435f937fab35c59725741a3ff716c6055fed60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:04:42 np0005540825 podman[205694]: 2025-12-01 10:04:42.807684055 +0000 UTC m=+0.151614307 container init 8e003e57edf608214b48d7572fad94fe12b4284edc82eb2bb9604f24cc6cdb71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 05:04:42 np0005540825 podman[205694]: 2025-12-01 10:04:42.819132302 +0000 UTC m=+0.163062474 container start 8e003e57edf608214b48d7572fad94fe12b4284edc82eb2bb9604f24cc6cdb71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:04:42 np0005540825 podman[205694]: 2025-12-01 10:04:42.822708351 +0000 UTC m=+0.166638613 container attach 8e003e57edf608214b48d7572fad94fe12b4284edc82eb2bb9604f24cc6cdb71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  1 05:04:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:43.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:43 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:43 np0005540825 python3.9[205831]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:04:43 np0005540825 lvm[205987]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:04:43 np0005540825 lvm[205987]: VG ceph_vg0 finished
Dec  1 05:04:43 np0005540825 confident_mcclintock[205750]: {}
Dec  1 05:04:43 np0005540825 systemd[1]: libpod-8e003e57edf608214b48d7572fad94fe12b4284edc82eb2bb9604f24cc6cdb71.scope: Deactivated successfully.
Dec  1 05:04:43 np0005540825 systemd[1]: libpod-8e003e57edf608214b48d7572fad94fe12b4284edc82eb2bb9604f24cc6cdb71.scope: Consumed 1.276s CPU time.
Dec  1 05:04:43 np0005540825 podman[205694]: 2025-12-01 10:04:43.612339402 +0000 UTC m=+0.956269604 container died 8e003e57edf608214b48d7572fad94fe12b4284edc82eb2bb9604f24cc6cdb71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:04:43 np0005540825 systemd[1]: var-lib-containers-storage-overlay-cab79fd05c9ec889487f746265435f937fab35c59725741a3ff716c6055fed60-merged.mount: Deactivated successfully.
Dec  1 05:04:43 np0005540825 podman[205694]: 2025-12-01 10:04:43.668747833 +0000 UTC m=+1.012678015 container remove 8e003e57edf608214b48d7572fad94fe12b4284edc82eb2bb9604f24cc6cdb71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mcclintock, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  1 05:04:43 np0005540825 systemd[1]: libpod-conmon-8e003e57edf608214b48d7572fad94fe12b4284edc82eb2bb9604f24cc6cdb71.scope: Deactivated successfully.
Dec  1 05:04:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:04:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:43 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5540041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:04:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:04:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:04:43 np0005540825 python3.9[206038]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764583482.654312-1622-3065897582702/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:44 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Dec  1 05:04:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:44.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:44 np0005540825 python3.9[206218]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:04:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:44 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:04:44 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:04:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:45.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:45 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:45 np0005540825 python3.9[206343]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764583484.0591507-1622-279047783699614/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:45 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd580003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:45 np0005540825 python3.9[206497]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:04:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:46 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd554004210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Dec  1 05:04:46 np0005540825 python3.9[206622]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764583485.3556204-1622-42626116843805/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:04:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:46.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:04:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:47.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:47.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:04:47 np0005540825 python3.9[206774]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:04:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:47 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd584002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:47 np0005540825 python3.9[206899]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764583486.6082633-1622-272611837942154/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:47 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:48 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:48 np0005540825 python3.9[207053]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:04:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Dec  1 05:04:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:48.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:48.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:04:48 np0005540825 python3.9[207178]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764583487.851434-1622-29306670311932/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:49 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:49 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd590002100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:49 np0005540825 python3.9[207332]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  1 05:04:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:50 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5840036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:50 np0005540825 podman[207381]: 2025-12-01 10:04:50.289368523 +0000 UTC m=+0.150730752 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 05:04:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:04:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:50.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:50 np0005540825 python3.9[207512]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:04:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:51.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:04:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:51 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c0049f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:51 np0005540825 python3.9[207665]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:51] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:04:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:04:51] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:04:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:51 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c0049f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:51 np0005540825 python3.9[207818]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:52 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:04:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:52.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:52 np0005540825 python3.9[207970]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:04:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:53.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:04:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:53 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:53 np0005540825 python3.9[208122]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:53 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:53 np0005540825 python3.9[208276]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:54 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c0049f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:04:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:04:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:04:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:54.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:54 np0005540825 python3.9[208428]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:04:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:55.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:04:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:55 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:55 np0005540825 python3.9[208581]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:55 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:56 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5840036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:56 np0005540825 python3.9[208734]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 05:04:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:56 np0005540825 python3.9[208886]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:57.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:57.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:04:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:57.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:04:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:57 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c0049f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:57 np0005540825 python3.9[209040]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:57 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:58 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:58 np0005540825 python3.9[209192]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:04:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:04:58.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:04:58.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:04:58 np0005540825 python3.9[209344]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:04:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:04:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:04:59.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:04:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:59 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5840036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:59 np0005540825 python3.9[209498]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:04:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:04:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:04:59 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c0049f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:04:59 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec  1 05:04:59 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:04:59.781218) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:04:59 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec  1 05:04:59 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583499781281, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 3861, "num_deletes": 502, "total_data_size": 7801409, "memory_usage": 7913576, "flush_reason": "Manual Compaction"}
Dec  1 05:04:59 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec  1 05:05:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:05:00 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583500238412, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 4370129, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13301, "largest_seqno": 17161, "table_properties": {"data_size": 4358823, "index_size": 6328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3909, "raw_key_size": 30747, "raw_average_key_size": 19, "raw_value_size": 4331874, "raw_average_value_size": 2805, "num_data_blocks": 275, "num_entries": 1544, "num_filter_entries": 1544, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764583099, "oldest_key_time": 1764583099, "file_creation_time": 1764583499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 457227 microseconds, and 16252 cpu microseconds.
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:00.238454) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 4370129 bytes OK
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:00.238476) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:00.388768) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:00.388833) EVENT_LOG_v1 {"time_micros": 1764583500388819, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:00.388868) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 7785796, prev total WAL file size 7785796, number of live WAL files 2.
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:00.393225) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(4267KB)], [32(13MB)]
Dec  1 05:05:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583500393377, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 18163983, "oldest_snapshot_seqno": -1}
Dec  1 05:05:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:05:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:00.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:00 np0005540825 python3.9[209675]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:05:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:01.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:05:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:05:01 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:01 np0005540825 python3.9[209798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583500.0179596-2285-135156350929297/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:01] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:05:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:01] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:05:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:05:01 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5840036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5005 keys, 13509059 bytes, temperature: kUnknown
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583501818593, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 13509059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13474077, "index_size": 21368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 125350, "raw_average_key_size": 25, "raw_value_size": 13381687, "raw_average_value_size": 2673, "num_data_blocks": 892, "num_entries": 5005, "num_filter_entries": 5005, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764583500, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:01.818953) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 13509059 bytes
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:01.825795) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 12.7 rd, 9.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.2, 13.2 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(7.2) write-amplify(3.1) OK, records in: 5834, records dropped: 829 output_compression: NoCompression
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:01.825872) EVENT_LOG_v1 {"time_micros": 1764583501825837, "job": 14, "event": "compaction_finished", "compaction_time_micros": 1425316, "compaction_time_cpu_micros": 36284, "output_level": 6, "num_output_files": 1, "total_output_size": 13509059, "num_input_records": 5834, "num_output_records": 5005, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583501828214, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583501834892, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:00.393043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:01.835010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:01.835020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:01.835031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:01.835036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:01 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:01.835040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:02 np0005540825 python3.9[209952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:05:02 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5840036c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:05:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:02.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:02 np0005540825 python3.9[210075]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583501.4394112-2285-73328357110841/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:03.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:05:03 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd58c0049f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:03 np0005540825 python3.9[210228]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:05:03 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd560001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:03 np0005540825 python3.9[210352]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583502.7609262-2285-79394117721950/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[165837]: 01/12/2025 10:05:04 : epoch 692d6774 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5900092a0 fd 48 proxy ignored for local
Dec  1 05:05:04 np0005540825 kernel: ganesha.nfsd[204312]: segfault at 50 ip 00007fd63ea9932e sp 00007fd5f67fb210 error 4 in libntirpc.so.5.8[7fd63ea7e000+2c000] likely on CPU 6 (core 0, socket 6)
Dec  1 05:05:04 np0005540825 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
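In a kernel Code: dump, the byte wrapped in <...> is the first byte of the instruction at the faulting ip. Disassembling that window offline shows why the fault address is 0x50; a sketch assuming the third-party capstone Python bindings are installed:

    # Decode the faulting instruction from the kernel "Code:" line above.
    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    code = bytes.fromhex("458b6550")  # <45> 8b 65 50 from the dump
    for insn in Cs(CS_ARCH_X86, CS_MODE_64).disasm(code, 0x7fd63ea9932e):
        print(hex(insn.address), insn.mnemonic, insn.op_str)
    # -> mov r12d, dword ptr [r13 + 0x50]
    # With r13 == NULL this reads address 0x50, matching "segfault at 50";
    # error 4 on x86-64 denotes a user-mode read of a non-present page.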
Dec  1 05:05:04 np0005540825 systemd[1]: Started Process Core Dump (PID 210428/UID 0).
Dec  1 05:05:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:05:04 np0005540825 python3.9[210506]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:04.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:05:04.556 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:05:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:05:04.557 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:05:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:05:04.557 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423

Dec  1 05:05:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:05:05 np0005540825 python3.9[210629]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583504.072158-2285-264087526081958/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:05:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:05.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:05:05 np0005540825 systemd-coredump[210430]: Process 165841 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 67:#012#0  0x00007fd63ea9932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  1 05:05:05 np0005540825 systemd[1]: systemd-coredump@2-210428-0.service: Deactivated successfully.
Dec  1 05:05:05 np0005540825 systemd[1]: systemd-coredump@2-210428-0.service: Consumed 1.122s CPU time.
Dec  1 05:05:05 np0005540825 podman[210786]: 2025-12-01 10:05:05.636812899 +0000 UTC m=+0.035742030 container died 712a4c1e4a3d4112359d36679955d704512aa624ff7c4e557acb04aadf264297 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:05:05 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c15a33afbd212731f526e149f8e80099a20ea62e5282ee95fe97e02819597547-merged.mount: Deactivated successfully.
Dec  1 05:05:05 np0005540825 podman[210786]: 2025-12-01 10:05:05.784607009 +0000 UTC m=+0.183536180 container remove 712a4c1e4a3d4112359d36679955d704512aa624ff7c4e557acb04aadf264297 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  1 05:05:05 np0005540825 python3.9[210794]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:05 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Main process exited, code=exited, status=139/n/a
Dec  1 05:05:05 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Failed with result 'exit-code'.
Dec  1 05:05:05 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.910s CPU time.
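The status=139 in the failed unit above is most likely the shell-style 128+signal encoding passed up by the container wrapper, i.e. it corresponds to the ganesha.nfsd segfault core-dumped a few lines earlier. A standard-library check:

    # Decode the exit status reported by systemd for the nfs unit above.
    import signal
    print(signal.Signals(139 - 128).name)  # SIGSEGV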
Dec  1 05:05:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:05:06 np0005540825 python3.9[210954]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583505.2490144-2285-159213789872072/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:06.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:07.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:05:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:05:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:07.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:05:07 np0005540825 python3.9[211106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:07 np0005540825 podman[211203]: 2025-12-01 10:05:07.598661398 +0000 UTC m=+0.064754033 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  1 05:05:07 np0005540825 python3.9[211250]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583506.650738-2285-222081506573711/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:05:08 np0005540825 python3.9[211402]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:05:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:08.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:05:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:08.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:05:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:08.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:05:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:08.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:05:09 np0005540825 python3.9[211525]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583507.9656534-2285-2242173461672/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:09.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:05:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:05:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:05:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:05:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:05:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:05:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:05:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:05:09 np0005540825 python3.9[211679]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:05:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100510 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:05:10 np0005540825 python3.9[211802]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583509.219876-2285-239126444217454/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:05:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:05:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:10.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:05:11 np0005540825 python3.9[211954]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:05:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:11.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:05:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:11] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:05:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:11] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:05:11 np0005540825 python3.9[212079]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583510.516367-2285-45119501166673/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:12 np0005540825 python3.9[212231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:05:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:12.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:12 np0005540825 python3.9[212354]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583511.766596-2285-143145201419906/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:05:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:13.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:05:13 np0005540825 python3.9[212508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100513 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:05:14 np0005540825 python3.9[212631]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583513.0857456-2285-263033693957161/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  1 05:05:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:14.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:05:14 np0005540825 python3.9[212783]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:05:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:15.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:05:15 np0005540825 python3.9[212908]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583514.455511-2285-190309921580980/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:16 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Scheduled restart job, restart counter is at 3.
Dec  1 05:05:16 np0005540825 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:05:16 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.910s CPU time.
Dec  1 05:05:16 np0005540825 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 05:05:16 np0005540825 podman[213113]: 2025-12-01 10:05:16.326914503 +0000 UTC m=+0.047941578 container create b89244ce38320a0684f64abc70e92f2b994e811eeb3ae561ac553a1a8ad33acd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  1 05:05:16 np0005540825 python3.9[213080]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc196175fbb19d70d65645f738a60375d74ca6f4df33f94d84edc94f02bc7e4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc196175fbb19d70d65645f738a60375d74ca6f4df33f94d84edc94f02bc7e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc196175fbb19d70d65645f738a60375d74ca6f4df33f94d84edc94f02bc7e4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc196175fbb19d70d65645f738a60375d74ca6f4df33f94d84edc94f02bc7e4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.pytvsu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:16 np0005540825 podman[213113]: 2025-12-01 10:05:16.306423496 +0000 UTC m=+0.027450561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:05:16 np0005540825 podman[213113]: 2025-12-01 10:05:16.408761438 +0000 UTC m=+0.129788523 container init b89244ce38320a0684f64abc70e92f2b994e811eeb3ae561ac553a1a8ad33acd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:05:16 np0005540825 podman[213113]: 2025-12-01 10:05:16.414359863 +0000 UTC m=+0.135386928 container start b89244ce38320a0684f64abc70e92f2b994e811eeb3ae561ac553a1a8ad33acd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:05:16 np0005540825 bash[213113]: b89244ce38320a0684f64abc70e92f2b994e811eeb3ae561ac553a1a8ad33acd
Dec  1 05:05:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:05:16 np0005540825 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:05:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:16 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  1 05:05:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:16 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  1 05:05:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:16 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  1 05:05:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:16 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  1 05:05:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:16 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  1 05:05:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:16 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  1 05:05:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:16 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  1 05:05:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:16 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:05:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:16.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:16 np0005540825 python3.9[213292]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583515.7677999-2285-75899620162743/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:17.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:05:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:17.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:17 np0005540825 python3.9[213446]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:18 np0005540825 python3.9[213569]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583517.0838075-2285-120686631116328/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  1 05:05:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:18.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:18 np0005540825 python3.9[213719]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:05:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:18.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:05:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:18.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:05:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:05:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:19.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:05:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:05:19 np0005540825 python3.9[213900]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  1 05:05:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  1 05:05:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:20.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:21 np0005540825 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  1 05:05:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:21.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:21 np0005540825 podman[213907]: 2025-12-01 10:05:21.315941683 +0000 UTC m=+0.151134354 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Dec  1 05:05:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:21] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:05:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:21] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:05:22 np0005540825 python3.9[214085]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  1 05:05:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:22.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:22 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:05:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:22 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:05:22 np0005540825 python3.9[214237]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:05:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:23.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:05:23 np0005540825 python3.9[214391]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:24 np0005540825 python3.9[214543]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec  1 05:05:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:05:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:05:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:24.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:05:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:25.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:25 np0005540825 python3.9[214695]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:25 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:05:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:25 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:05:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:25 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:05:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:25 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:05:25 np0005540825 python3.9[214849]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:05:26 np0005540825 python3.9[215001]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:26.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:27.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:05:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:05:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:27.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:05:27 np0005540825 python3.9[215154]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:28 np0005540825 python3.9[215307]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:05:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:28.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
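The Kerberos warnings above are expected when the NFS callback channel is not kerberized: no default realm is configured and no nfs/ principal exists in /etc/krb5.keytab. A quick way to confirm what the keytab actually holds (a sketch, assuming the krb5 client tools are installed on the host):

    klist -k /etc/krb5.keytab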
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:28 : epoch 692d685c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
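The D-Bus CRIT messages during this startup come from the containerized ganesha.nfsd not seeing the host bus socket (/run/dbus/system_bus_socket), so the dbus service thread exits while the server itself still initializes. A minimal check from the host, using the container name from these entries (a sketch; cephadm-managed NFS containers commonly run without D-Bus, so the errors are usually harmless):

    podman exec ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu \
        ls -l /run/dbus/system_bus_socket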
Dec  1 05:05:28 np0005540825 python3.9[215459]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:28.915Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:28.915Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:05:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:28.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
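Both ceph-dashboard webhook receivers are timing out against compute-1 and compute-2. A basic reachability probe from this node (a sketch only; Alertmanager normally POSTs JSON to this endpoint, so whether the TCP connect succeeds matters more than the status code returned):

    for host in compute-1 compute-2; do
        curl -sS -m 5 -o /dev/null -w "${host}: %{http_code}\n" \
            "http://${host}.ctlplane.example.com:8443/api/prometheus_receiver" \
            || echo "${host}: unreachable"
    done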
Dec  1 05:05:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:05:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:29.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:05:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:29 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64d8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:29 np0005540825 python3.9[215628]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 05:05:29 np0005540825 systemd[1]: Reloading.
Dec  1 05:05:29 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:05:29 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:05:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:05:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:29 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64cc0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:30 np0005540825 systemd[1]: Starting libvirt logging daemon socket...
Dec  1 05:05:30 np0005540825 systemd[1]: Listening on libvirt logging daemon socket.
Dec  1 05:05:30 np0005540825 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  1 05:05:30 np0005540825 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  1 05:05:30 np0005540825 systemd[1]: Starting libvirt logging daemon...
Dec  1 05:05:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:30 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:30 np0005540825 systemd[1]: Started libvirt logging daemon.
Dec  1 05:05:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:05:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:30.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:31.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:31 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:31 np0005540825 python3.9[215824]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 05:05:31 np0005540825 systemd[1]: Reloading.
Dec  1 05:05:31 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:05:31 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:05:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:31] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:05:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:31] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:05:31 np0005540825 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  1 05:05:31 np0005540825 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  1 05:05:31 np0005540825 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  1 05:05:31 np0005540825 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  1 05:05:31 np0005540825 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  1 05:05:31 np0005540825 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  1 05:05:31 np0005540825 systemd[1]: Starting libvirt nodedev daemon...
Dec  1 05:05:31 np0005540825 systemd[1]: Started libvirt nodedev daemon.
Dec  1 05:05:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:31 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64d4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:32 np0005540825 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  1 05:05:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100532 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:05:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:32 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64cc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:32 np0005540825 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  1 05:05:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  1 05:05:32 np0005540825 python3.9[216043]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 05:05:32 np0005540825 systemd[1]: Reloading.
Dec  1 05:05:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:05:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:32.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:05:32 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:05:32 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:05:32 np0005540825 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  1 05:05:32 np0005540825 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  1 05:05:32 np0005540825 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  1 05:05:32 np0005540825 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  1 05:05:32 np0005540825 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  1 05:05:32 np0005540825 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  1 05:05:32 np0005540825 systemd[1]: Starting libvirt proxy daemon...
Dec  1 05:05:32 np0005540825 systemd[1]: Started libvirt proxy daemon.
Dec  1 05:05:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:05:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:33.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:05:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:33 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:33 np0005540825 python3.9[216266]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 05:05:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100533 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:05:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:33 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:33 np0005540825 systemd[1]: Reloading.
Dec  1 05:05:33 np0005540825 setroubleshoot[216014]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 8e3d316d-e702-49bc-be4d-5716f8fb9444
Dec  1 05:05:33 np0005540825 setroubleshoot[216014]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Dec  1 05:05:33 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:05:33 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:05:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:34 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:34 np0005540825 systemd[1]: Listening on libvirt locking daemon socket.
Dec  1 05:05:34 np0005540825 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  1 05:05:34 np0005540825 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  1 05:05:34 np0005540825 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  1 05:05:34 np0005540825 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  1 05:05:34 np0005540825 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  1 05:05:34 np0005540825 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  1 05:05:34 np0005540825 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  1 05:05:34 np0005540825 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  1 05:05:34 np0005540825 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  1 05:05:34 np0005540825 systemd[1]: Starting libvirt QEMU daemon...
Dec  1 05:05:34 np0005540825 systemd[1]: Started libvirt QEMU daemon.
Dec  1 05:05:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Dec  1 05:05:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:05:35 np0005540825 python3.9[216482]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 05:05:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:05:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:35.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:05:35 np0005540825 systemd[1]: Reloading.
Dec  1 05:05:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:35 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64d40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:35 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:05:35 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:05:35 np0005540825 systemd[1]: Starting libvirt secret daemon socket...
Dec  1 05:05:35 np0005540825 systemd[1]: Listening on libvirt secret daemon socket.
Dec  1 05:05:35 np0005540825 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  1 05:05:35 np0005540825 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  1 05:05:35 np0005540825 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  1 05:05:35 np0005540825 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  1 05:05:35 np0005540825 systemd[1]: Starting libvirt secret daemon...
Dec  1 05:05:35 np0005540825 systemd[1]: Started libvirt secret daemon.
Dec  1 05:05:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:35 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:36 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64cc0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 597 B/s wr, 2 op/s
Dec  1 05:05:36 np0005540825 python3.9[216697]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:36.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:37.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:05:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:37.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:05:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:37.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:37 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:37 np0005540825 python3.9[216849]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 05:05:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:37 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64d40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:05:37 np0005540825 podman[216975]: 2025-12-01 10:05:37.912646773 +0000 UTC m=+0.080470969 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  1 05:05:38 np0005540825 python3.9[217022]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
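The #012 sequences in the _raw_params above are syslog-escaped newlines; decoded, the task runs this small shell script to pull the cluster fsid out of the deployed ceph.conf (a reconstruction of the logged command, not an extra log entry):

    set -o pipefail
    echo ceph
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs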
Dec  1 05:05:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[213130]: 01/12/2025 10:05:38 : epoch 692d685c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64c8001d50 fd 39 proxy ignored for local
Dec  1 05:05:38 np0005540825 kernel: ganesha.nfsd[215465]: segfault at 50 ip 00007f6585f5b32e sp 00007f654b7fd210 error 4 in libntirpc.so.5.8[7f6585f40000+2c000] likely on CPU 3 (core 0, socket 3)
Dec  1 05:05:38 np0005540825 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  1 05:05:38 np0005540825 systemd[1]: Started Process Core Dump (PID 217027/UID 0).
Dec  1 05:05:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Dec  1 05:05:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:38.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:38.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:05:38 np0005540825 python3.9[217178]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 05:05:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:05:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:39.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:05:39 np0005540825 systemd-coredump[217033]: Process 213153 (ganesha.nfsd) of user 0 dumped core.

Stack trace of thread 45:
#0  0x00007f6585f5b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
ELF object binary architecture: AMD x86-64
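systemd-coredump has captured the ganesha.nfsd core, and the lone frame points into libntirpc.so.5.8, consistent with the TIRPC errors and the kernel segfault report above. On this host the dump could be examined with coredumpctl (PID taken from the entry above; symbol resolution would need the matching debuginfo packages):

    coredumpctl list ganesha.nfsd
    coredumpctl info 213153
    coredumpctl debug 213153    # opens gdb against the stored core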
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:05:39
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.log', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:05:39 np0005540825 systemd[1]: systemd-coredump@3-217027-0.service: Deactivated successfully.
Dec  1 05:05:39 np0005540825 systemd[1]: systemd-coredump@3-217027-0.service: Consumed 1.230s CPU time.
Dec  1 05:05:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:05:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:05:39 np0005540825 podman[217284]: 2025-12-01 10:05:39.554541878 +0000 UTC m=+0.041595926 container died b89244ce38320a0684f64abc70e92f2b994e811eeb3ae561ac553a1a8ad33acd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:05:39 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7fc196175fbb19d70d65645f738a60375d74ca6f4df33f94d84edc94f02bc7e4-merged.mount: Deactivated successfully.
Dec  1 05:05:39 np0005540825 podman[217284]: 2025-12-01 10:05:39.592127417 +0000 UTC m=+0.079181445 container remove b89244ce38320a0684f64abc70e92f2b994e811eeb3ae561ac553a1a8ad33acd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:05:39 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Main process exited, code=exited, status=139/n/a
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
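The pg targets printed above are consistent with capacity_ratio * bias * 300, i.e. the default mon_target_pg_per_osd=100 spread over this cluster's 3 OSDs (an inference from the logged numbers, not a quote of the autoscaler code). For 'cephfs.cephfs.meta':

    python3 -c 'print(5.087256625643029e-07 * 4.0 * 300)'
    # 0.0006104707950771635 -- matches the logged pg target, quantized to 16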
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:05:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:05:39 np0005540825 python3.9[217348]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:05:39 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Failed with result 'exit-code'.
Dec  1 05:05:39 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.710s CPU time.
Dec  1 05:05:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Dec  1 05:05:40 np0005540825 python3.9[217518]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583539.3390412-3359-45068211029054/.source.xml follow=False _original_basename=secret.xml.j2 checksum=b828192784cecb28a4416a509fc39e7cc46c1495 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:40.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:41.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:41] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 05:05:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:41] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 05:05:41 np0005540825 python3.9[217671]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 365f19c2-81e5-5edd-b6b4-280555214d3a#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
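Decoded (again, #012 is a syslog-escaped newline), this task re-registers the libvirt Ceph secret under the cluster fsid:

    virsh secret-undefine 365f19c2-81e5-5edd-b6b4-280555214d3a
    virsh secret-define --file /tmp/secret.xml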
Dec  1 05:05:42 np0005540825 python3.9[217834]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.505358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583542505402, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 581, "num_deletes": 251, "total_data_size": 786570, "memory_usage": 798552, "flush_reason": "Manual Compaction"}
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583542514761, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 772229, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17162, "largest_seqno": 17742, "table_properties": {"data_size": 769107, "index_size": 1094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7040, "raw_average_key_size": 18, "raw_value_size": 762989, "raw_average_value_size": 2040, "num_data_blocks": 49, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764583501, "oldest_key_time": 1764583501, "file_creation_time": 1764583542, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 9468 microseconds, and 4455 cpu microseconds.
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.514823) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 772229 bytes OK
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.514846) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.516150) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.516177) EVENT_LOG_v1 {"time_micros": 1764583542516170, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.516197) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 783464, prev total WAL file size 783464, number of live WAL files 2.
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.517082) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(754KB)], [35(12MB)]
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583542517144, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 14281288, "oldest_snapshot_seqno": -1}
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4869 keys, 12091433 bytes, temperature: kUnknown
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583542603452, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12091433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12058422, "index_size": 19717, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 123136, "raw_average_key_size": 25, "raw_value_size": 11969441, "raw_average_value_size": 2458, "num_data_blocks": 819, "num_entries": 4869, "num_filter_entries": 4869, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764583542, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.603706) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12091433 bytes
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.605278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.3 rd, 140.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.9 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(34.2) write-amplify(15.7) OK, records in: 5379, records dropped: 510 output_compression: NoCompression
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.605299) EVENT_LOG_v1 {"time_micros": 1764583542605289, "job": 16, "event": "compaction_finished", "compaction_time_micros": 86388, "compaction_time_cpu_micros": 45199, "output_level": 6, "num_output_files": 1, "total_output_size": 12091433, "num_input_records": 5379, "num_output_records": 4869, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583542605584, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec  1 05:05:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:42.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
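The radosgw "beast" lines above share one fixed shape: frontend handle, client IP, user, timestamp, request line, HTTP status, byte count, and a trailing latency field (the once-per-second anonymous "HEAD /" requests look like load-balancer health probes). A minimal Python sketch for pulling those fields out, written against exactly the format shown here; the field names are illustrative, not official radosgw terms:

    import re

    # Tailored to the beast access-log lines in this journal.
    BEAST_RE = re.compile(
        r'beast: (?P<handle>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        m = BEAST_RE.search(line)
        return m.groupdict() if m else None

    # parse_beast(line_above)["request"] -> 'HEAD / HTTP/1.0'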
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583542608613, "job": 16, "event": "table_file_deletion", "file_number": 35}
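That closes compaction job 16 end to end. The EVENT_LOG_v1 entries are machine-readable JSON after a fixed marker, and the quoted key range is hex: '7061786F730031303034' decodes to b'paxos\x001004', i.e. the monitor compacting its paxos keys 1004..1256. A sketch that replays such lines and recomputes the job's figures, assuming only the event fields visible above:

    import json

    MARKER = "EVENT_LOG_v1 "

    def rocksdb_events(lines):
        # Yield the JSON payload embedded in each EVENT_LOG_v1 line.
        for line in lines:
            i = line.find(MARKER)
            if i != -1:
                yield json.loads(line[i + len(MARKER):])

    def compactions(lines):
        # Pair compaction_started with compaction_finished by job id.
        started = {}
        for ev in rocksdb_events(lines):
            if ev.get("event") == "compaction_started":
                started[ev["job"]] = ev
            elif ev.get("event") == "compaction_finished":
                begin = started.pop(ev["job"], {})
                yield {
                    "job": ev["job"],
                    "in_bytes": begin.get("input_data_size"),
                    "out_bytes": ev["total_output_size"],
                    "dropped": ev["num_input_records"]
                               - ev["num_output_records"],
                }

    # bytes.fromhex("7061786F730031303034") == b'paxos\x001004'
    # For job 16: in_bytes=14281288, out_bytes=12091433, dropped=510,
    # matching the "records in: 5379, records dropped: 510" summary.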
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.516942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.608713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.608722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.608725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.608730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:42 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:05:42.608733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:05:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:43.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:43 np0005540825 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  1 05:05:43 np0005540825 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.004s CPU time.
Dec  1 05:05:43 np0005540825 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  1 05:05:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100544 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:05:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:05:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:44.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
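The _set_new_cache_sizes figures are raw byte counts; converting them makes the split obvious (a quick sanity check, not an official formula): the 348127232-byte inc/full allocations are exactly 332 MiB and the 318767104-byte kv allocation exactly 304 MiB, out of a roughly 973 MiB cache_size target.

    # Bytes-to-MiB check of the _set_new_cache_sizes line above.
    for name, b in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                    ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
        print(f"{name}: {b / 2**20:.0f} MiB")
    # cache_size: 973 MiB, inc/full_alloc: 332 MiB, kv_alloc: 304 MiB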
Dec  1 05:05:44 np0005540825 podman[218421]: 2025-12-01 10:05:44.93211563 +0000 UTC m=+0.069858770 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:05:44 np0005540825 python3.9[218406]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:45 np0005540825 podman[218421]: 2025-12-01 10:05:45.064812044 +0000 UTC m=+0.202555194 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:05:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:05:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:45.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:05:45 np0005540825 podman[218679]: 2025-12-01 10:05:45.522127613 +0000 UTC m=+0.072744226 container exec 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:05:45 np0005540825 podman[218679]: 2025-12-01 10:05:45.5280912 +0000 UTC m=+0.078707823 container exec_died 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:05:45 np0005540825 python3.9[218704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100545 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
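With nfs.cephfs.1 also failing its Layer4 check, only one ganesha backend is left. haproxy's DOWN warnings carry the remaining counts in a fixed phrase, so the moment a backend empties out can be caught mechanically; a rough sketch, with the regex tied to the exact wording above:

    import re

    DOWN_RE = re.compile(
        r"Server (?P<proxy>\S+)/(?P<server>\S+) is DOWN.*?"
        r"(?P<active>\d+) active and (?P<backup>\d+) backup servers left"
    )

    def warn_if_empty(line):
        # Flag the transition to zero available servers.
        m = DOWN_RE.search(line)
        if m and m.group("active") == "0" and m.group("backup") == "0":
            print(f"proxy {m.group('proxy')} has no servers left")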
Dec  1 05:05:46 np0005540825 podman[218945]: 2025-12-01 10:05:46.08457825 +0000 UTC m=+0.060748100 container exec 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 05:05:46 np0005540825 podman[218945]: 2025-12-01 10:05:46.131795884 +0000 UTC m=+0.107965744 container exec_died 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 05:05:46 np0005540825 python3.9[218953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583545.198934-3524-32609357557932/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:46 np0005540825 podman[219033]: 2025-12-01 10:05:46.384785493 +0000 UTC m=+0.072197781 container exec a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git, distribution-scope=public, release=1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, description=keepalived for Ceph, vendor=Red Hat, Inc.)
Dec  1 05:05:46 np0005540825 podman[219033]: 2025-12-01 10:05:46.423493492 +0000 UTC m=+0.110905740 container exec_died a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vcs-type=git)
Dec  1 05:05:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:05:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:46.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:46 np0005540825 podman[219133]: 2025-12-01 10:05:46.75880833 +0000 UTC m=+0.064519680 container exec fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:05:46 np0005540825 podman[219133]: 2025-12-01 10:05:46.809515415 +0000 UTC m=+0.115226755 container exec_died fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:05:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:47.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:05:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:47.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
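The alertmanager lines are logfmt: space-separated key=value pairs, with values quoted (and backslash-escaped) when they contain spaces. A small parser sufficient for the msg/err fields above:

    import re

    # key=value pairs; quoted values may contain escaped quotes.
    PAIR_RE = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse_logfmt(line):
        out = {}
        for key, val in PAIR_RE.findall(line):
            if val.startswith('"'):
                val = val[1:-1].replace('\\"', '"')
            out[key] = val
        return out

    # parse_logfmt(line)["err"] shows both dashboard receivers
    # (compute-1 and compute-2, port 8443) are unreachable.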
Dec  1 05:05:47 np0005540825 podman[219303]: 2025-12-01 10:05:47.113517038 +0000 UTC m=+0.087047893 container exec 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 05:05:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:05:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:47.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:05:47 np0005540825 python3.9[219304]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:47 np0005540825 podman[219303]: 2025-12-01 10:05:47.320823506 +0000 UTC m=+0.294354271 container exec_died 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 05:05:47 np0005540825 podman[219566]: 2025-12-01 10:05:47.846370201 +0000 UTC m=+0.072246983 container exec f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:05:47 np0005540825 podman[219566]: 2025-12-01 10:05:47.937135111 +0000 UTC m=+0.163011823 container exec_died f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:05:47 np0005540825 python3.9[219580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec  1 05:05:48 np0005540825 python3.9[219740]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:48.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:05:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 B/s rd, 0 op/s
Dec  1 05:05:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 121 B/s rd, 0 op/s
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:05:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:48.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:05:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:48.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:05:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:05:49 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:49 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:49 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:05:49 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:49 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:49 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
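Each audited command sits between "cmd=" and ": dispatch" as a JSON array, so the cephadm mgr's burst of activity here can be tallied from the journal alone. A sketch tied to that exact framing (entries carrying no cmd= at all, like several above, are simply skipped):

    import json
    import re
    from collections import Counter

    CMD_RE = re.compile(r"cmd=(\[.*\]): dispatch")

    def audit_prefixes(lines):
        # Count command prefixes dispatched via the mon audit channel.
        counts = Counter()
        for line in lines:
            m = CMD_RE.search(line)
            if m:
                for cmd in json.loads(m.group(1)):
                    counts[cmd.get("prefix", "?")] += 1
        return counts

    # For this burst: config generate-minimal-conf, auth get, osd tree,
    # all from='mgr.14643' entity='mgr.compute-0.fospow'.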
Dec  1 05:05:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:49.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:49 np0005540825 python3.9[219954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:49 np0005540825 podman[220080]: 2025-12-01 10:05:49.526545625 +0000 UTC m=+0.053272134 container create d3e70f0157deb9f86d5ae3d357486429fbe3b5853bc0faa7944f53bf1c8e6967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 05:05:49 np0005540825 podman[220080]: 2025-12-01 10:05:49.509674331 +0000 UTC m=+0.036400880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:05:49 np0005540825 systemd[1]: Started libpod-conmon-d3e70f0157deb9f86d5ae3d357486429fbe3b5853bc0faa7944f53bf1c8e6967.scope.
Dec  1 05:05:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:05:49 np0005540825 podman[220080]: 2025-12-01 10:05:49.665876893 +0000 UTC m=+0.192603492 container init d3e70f0157deb9f86d5ae3d357486429fbe3b5853bc0faa7944f53bf1c8e6967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:05:49 np0005540825 podman[220080]: 2025-12-01 10:05:49.676110542 +0000 UTC m=+0.202837051 container start d3e70f0157deb9f86d5ae3d357486429fbe3b5853bc0faa7944f53bf1c8e6967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 05:05:49 np0005540825 podman[220080]: 2025-12-01 10:05:49.679649415 +0000 UTC m=+0.206375974 container attach d3e70f0157deb9f86d5ae3d357486429fbe3b5853bc0faa7944f53bf1c8e6967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:05:49 np0005540825 heuristic_montalcini[220112]: 167 167
Dec  1 05:05:49 np0005540825 systemd[1]: libpod-d3e70f0157deb9f86d5ae3d357486429fbe3b5853bc0faa7944f53bf1c8e6967.scope: Deactivated successfully.
Dec  1 05:05:49 np0005540825 podman[220080]: 2025-12-01 10:05:49.681390541 +0000 UTC m=+0.208117070 container died d3e70f0157deb9f86d5ae3d357486429fbe3b5853bc0faa7944f53bf1c8e6967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:05:49 np0005540825 python3.9[220106]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.63ivyhkj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:49 np0005540825 systemd[1]: var-lib-containers-storage-overlay-77da9064241baa7e3051d5664648afa0e2130ed6495835b4801f7040421b2281-merged.mount: Deactivated successfully.
Dec  1 05:05:49 np0005540825 podman[220080]: 2025-12-01 10:05:49.721735783 +0000 UTC m=+0.248462302 container remove d3e70f0157deb9f86d5ae3d357486429fbe3b5853bc0faa7944f53bf1c8e6967 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:05:49 np0005540825 systemd[1]: libpod-conmon-d3e70f0157deb9f86d5ae3d357486429fbe3b5853bc0faa7944f53bf1c8e6967.scope: Deactivated successfully.
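That create/init/start/attach/died/remove run is one complete lifecycle for a short-lived helper container (cephadm launches these throwaway, randomly named containers such as heuristic_montalcini for quick probes). The podman journal events are regular enough to reconstruct each container's timeline:

    import re
    from collections import defaultdict

    EVENT_RE = re.compile(
        r"podman\[\d+\]: \S+ \S+ \S+ UTC m=\S+ container "
        r"(?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def container_timelines(lines):
        # Map short container id -> ordered list of lifecycle events.
        timelines = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                timelines[m.group("cid")[:12]].append(m.group("event"))
        return timelines

    # d3e70f0157de -> ['create', 'init', 'start', 'attach', 'died',
    #                  'remove'] for the container above.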
Dec  1 05:05:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:05:49 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Scheduled restart job, restart counter is at 4.
Dec  1 05:05:49 np0005540825 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:05:49 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.710s CPU time.
Dec  1 05:05:49 np0005540825 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 05:05:49 np0005540825 podman[220159]: 2025-12-01 10:05:49.905932323 +0000 UTC m=+0.063493443 container create ac2952676af05343ca7e2a34c2abfb0e85e56ad49872a6f0cad5952866aebc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_benz, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:05:49 np0005540825 systemd[1]: Started libpod-conmon-ac2952676af05343ca7e2a34c2abfb0e85e56ad49872a6f0cad5952866aebc99.scope.
Dec  1 05:05:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:05:49 np0005540825 podman[220159]: 2025-12-01 10:05:49.881479039 +0000 UTC m=+0.039040169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:05:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a3f24bdbc2290ee0090140ead4a3f14051f6a1f996fcc82638da393209a111/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a3f24bdbc2290ee0090140ead4a3f14051f6a1f996fcc82638da393209a111/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a3f24bdbc2290ee0090140ead4a3f14051f6a1f996fcc82638da393209a111/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a3f24bdbc2290ee0090140ead4a3f14051f6a1f996fcc82638da393209a111/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a3f24bdbc2290ee0090140ead4a3f14051f6a1f996fcc82638da393209a111/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
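These xfs warnings are informational: 0x7fffffff is the signed 32-bit time_t ceiling, i.e. the classic Y2038 limit that applies to xfs inodes formatted without the bigtime feature.

    from datetime import datetime, timezone
    # 0x7fffffff seconds after the epoch -- where these mounts stop.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00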
Dec  1 05:05:49 np0005540825 podman[220159]: 2025-12-01 10:05:49.996443254 +0000 UTC m=+0.154004394 container init ac2952676af05343ca7e2a34c2abfb0e85e56ad49872a6f0cad5952866aebc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_benz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:05:50 np0005540825 podman[220159]: 2025-12-01 10:05:50.012796355 +0000 UTC m=+0.170357485 container start ac2952676af05343ca7e2a34c2abfb0e85e56ad49872a6f0cad5952866aebc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_benz, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:05:50 np0005540825 podman[220159]: 2025-12-01 10:05:50.01638793 +0000 UTC m=+0.173949060 container attach ac2952676af05343ca7e2a34c2abfb0e85e56ad49872a6f0cad5952866aebc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_benz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  1 05:05:50 np0005540825 podman[220279]: 2025-12-01 10:05:50.135132296 +0000 UTC m=+0.052936665 container create 10befa2b4a4711a0f07f1b41908bc8a32640288babf26b7dc6df679048c217dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:05:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d489ed16a71826fc6f7b1f8da8cee17c6a1ee81a630cf453eabda0edf732ce/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d489ed16a71826fc6f7b1f8da8cee17c6a1ee81a630cf453eabda0edf732ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d489ed16a71826fc6f7b1f8da8cee17c6a1ee81a630cf453eabda0edf732ce/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d489ed16a71826fc6f7b1f8da8cee17c6a1ee81a630cf453eabda0edf732ce/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.pytvsu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:50 np0005540825 podman[220279]: 2025-12-01 10:05:50.199827179 +0000 UTC m=+0.117631578 container init 10befa2b4a4711a0f07f1b41908bc8a32640288babf26b7dc6df679048c217dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  1 05:05:50 np0005540825 podman[220279]: 2025-12-01 10:05:50.115803577 +0000 UTC m=+0.033607986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:05:50 np0005540825 podman[220279]: 2025-12-01 10:05:50.21048925 +0000 UTC m=+0.128293619 container start 10befa2b4a4711a0f07f1b41908bc8a32640288babf26b7dc6df679048c217dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:05:50 np0005540825 bash[220279]: 10befa2b4a4711a0f07f1b41908bc8a32640288babf26b7dc6df679048c217dd
Dec  1 05:05:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:50 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  1 05:05:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:50 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  1 05:05:50 np0005540825 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:05:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:50 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  1 05:05:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:50 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  1 05:05:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:50 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  1 05:05:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:50 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  1 05:05:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:50 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  1 05:05:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:50 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
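ganesha's "epoch 692d687e" appears to be the daemon start time in hex unix seconds; decoding it gives 10:05:50 UTC, matching these timestamps, and the 90-second grace period is the standard NFS recovery window after a restart.

    from datetime import datetime, timezone
    # The epoch field decodes to this ganesha instance's start time.
    print(datetime.fromtimestamp(0x692d687e, tz=timezone.utc))
    # 2025-12-01 10:05:50+00:00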
Dec  1 05:05:50 np0005540825 angry_benz[220210]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:05:50 np0005540825 angry_benz[220210]: --> All data devices are unavailable
Dec  1 05:05:50 np0005540825 systemd[1]: libpod-ac2952676af05343ca7e2a34c2abfb0e85e56ad49872a6f0cad5952866aebc99.scope: Deactivated successfully.
Dec  1 05:05:50 np0005540825 conmon[220210]: conmon ac2952676af05343ca7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac2952676af05343ca7e2a34c2abfb0e85e56ad49872a6f0cad5952866aebc99.scope/container/memory.events
Dec  1 05:05:50 np0005540825 podman[220159]: 2025-12-01 10:05:50.408943314 +0000 UTC m=+0.566504414 container died ac2952676af05343ca7e2a34c2abfb0e85e56ad49872a6f0cad5952866aebc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_benz, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 05:05:50 np0005540825 python3.9[220383]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:50 np0005540825 systemd[1]: var-lib-containers-storage-overlay-76a3f24bdbc2290ee0090140ead4a3f14051f6a1f996fcc82638da393209a111-merged.mount: Deactivated successfully.
Dec  1 05:05:50 np0005540825 podman[220159]: 2025-12-01 10:05:50.457557644 +0000 UTC m=+0.615118754 container remove ac2952676af05343ca7e2a34c2abfb0e85e56ad49872a6f0cad5952866aebc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:05:50 np0005540825 systemd[1]: libpod-conmon-ac2952676af05343ca7e2a34c2abfb0e85e56ad49872a6f0cad5952866aebc99.scope: Deactivated successfully.
Dec  1 05:05:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:05:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:50.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:05:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 121 B/s wr, 0 op/s
Dec  1 05:05:50 np0005540825 python3.9[220555]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:51 np0005540825 podman[220621]: 2025-12-01 10:05:51.099316849 +0000 UTC m=+0.043458735 container create 00efcb200e95b8e0cd8cc61fa21d2ca1503d0266887857b48b3f1501eebd0b45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_faraday, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 05:05:51 np0005540825 systemd[1]: Started libpod-conmon-00efcb200e95b8e0cd8cc61fa21d2ca1503d0266887857b48b3f1501eebd0b45.scope.
Dec  1 05:05:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:05:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:51.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:05:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:05:51 np0005540825 podman[220621]: 2025-12-01 10:05:51.080837203 +0000 UTC m=+0.024979129 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:05:51 np0005540825 podman[220621]: 2025-12-01 10:05:51.19352977 +0000 UTC m=+0.137671736 container init 00efcb200e95b8e0cd8cc61fa21d2ca1503d0266887857b48b3f1501eebd0b45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:05:51 np0005540825 podman[220621]: 2025-12-01 10:05:51.206805599 +0000 UTC m=+0.150947515 container start 00efcb200e95b8e0cd8cc61fa21d2ca1503d0266887857b48b3f1501eebd0b45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:05:51 np0005540825 podman[220621]: 2025-12-01 10:05:51.211165154 +0000 UTC m=+0.155307040 container attach 00efcb200e95b8e0cd8cc61fa21d2ca1503d0266887857b48b3f1501eebd0b45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_faraday, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:05:51 np0005540825 hardcore_faraday[220639]: 167 167
Dec  1 05:05:51 np0005540825 systemd[1]: libpod-00efcb200e95b8e0cd8cc61fa21d2ca1503d0266887857b48b3f1501eebd0b45.scope: Deactivated successfully.
Dec  1 05:05:51 np0005540825 podman[220621]: 2025-12-01 10:05:51.215267862 +0000 UTC m=+0.159409748 container died 00efcb200e95b8e0cd8cc61fa21d2ca1503d0266887857b48b3f1501eebd0b45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_faraday, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:05:51 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7d966291db9fee812f1493ef880c36d20c4d041eea5e7e8733f6fdf58cafc684-merged.mount: Deactivated successfully.
Dec  1 05:05:51 np0005540825 podman[220621]: 2025-12-01 10:05:51.267828016 +0000 UTC m=+0.211969902 container remove 00efcb200e95b8e0cd8cc61fa21d2ca1503d0266887857b48b3f1501eebd0b45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_faraday, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:05:51 np0005540825 systemd[1]: libpod-conmon-00efcb200e95b8e0cd8cc61fa21d2ca1503d0266887857b48b3f1501eebd0b45.scope: Deactivated successfully.
Dec  1 05:05:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:51] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 05:05:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:05:51] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Dec  1 05:05:51 np0005540825 podman[220740]: 2025-12-01 10:05:51.481144922 +0000 UTC m=+0.055718818 container create 60549e417043d9fdffeb301f7e9f1d84e2ae8676fd7cdc2d7e514282cfce8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:05:51 np0005540825 systemd[1]: Started libpod-conmon-60549e417043d9fdffeb301f7e9f1d84e2ae8676fd7cdc2d7e514282cfce8783.scope.
Dec  1 05:05:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:05:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00e05873eb567e8a40531b98bddde5c83bbd8805eb2ea7ab1ecd0012fe472ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00e05873eb567e8a40531b98bddde5c83bbd8805eb2ea7ab1ecd0012fe472ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00e05873eb567e8a40531b98bddde5c83bbd8805eb2ea7ab1ecd0012fe472ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00e05873eb567e8a40531b98bddde5c83bbd8805eb2ea7ab1ecd0012fe472ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:51 np0005540825 podman[220740]: 2025-12-01 10:05:51.553980909 +0000 UTC m=+0.128554835 container init 60549e417043d9fdffeb301f7e9f1d84e2ae8676fd7cdc2d7e514282cfce8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:05:51 np0005540825 podman[220740]: 2025-12-01 10:05:51.461851014 +0000 UTC m=+0.036424930 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:05:51 np0005540825 podman[220740]: 2025-12-01 10:05:51.562414461 +0000 UTC m=+0.136988357 container start 60549e417043d9fdffeb301f7e9f1d84e2ae8676fd7cdc2d7e514282cfce8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  1 05:05:51 np0005540825 podman[220740]: 2025-12-01 10:05:51.565904893 +0000 UTC m=+0.140478809 container attach 60549e417043d9fdffeb301f7e9f1d84e2ae8676fd7cdc2d7e514282cfce8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:05:51 np0005540825 podman[220778]: 2025-12-01 10:05:51.641890753 +0000 UTC m=+0.124014125 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
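[annotation] The health_status event above is podman's periodic healthcheck of ovn_controller, which runs the configured test command /openstack/healthcheck inside the container (see the healthcheck key in config_data). The same check can be triggered on demand; a minimal sketch, using the container name from the log:

    # Run the configured healthcheck once; exit status 0 means healthy.
    podman healthcheck run ovn_controller && echo healthy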
Dec  1 05:05:51 np0005540825 python3.9[220831]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
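[annotation] The ansible task above captures `nft -j list ruleset` for ansible itself, so its JSON does not appear in this log. The JSON block that follows is unrelated: it is the stdout of the short-lived ceph container stupefied_jemison, which journald tags with the container name. For reference, the nft JSON can be inspected by hand the same way, e.g.:

    # Dump the ruleset as JSON and count top-level objects; requires jq.
    nft -j list ruleset | jq '.nftables | length'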
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]: {
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:    "1": [
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:        {
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            "devices": [
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "/dev/loop3"
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            ],
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            "lv_name": "ceph_lv0",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            "lv_size": "21470642176",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            "name": "ceph_lv0",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            "tags": {
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.cluster_name": "ceph",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.crush_device_class": "",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.encrypted": "0",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.osd_id": "1",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.type": "block",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.vdo": "0",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:                "ceph.with_tpm": "0"
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            },
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            "type": "block",
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:            "vg_name": "ceph_vg0"
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:        }
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]:    ]
Dec  1 05:05:51 np0005540825 stupefied_jemison[220782]: }
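[annotation] The JSON above is keyed by OSD id ("1") and describes the backing logical volume with its ceph.* LV tags (cluster fsid, osd_fsid, osd_id, osdspec_affinity, and so on). Its shape matches `ceph-volume lvm list --format json`; the exact command line is not logged, so that is an inference. Extracting the OSD-to-device mapping from output of this shape, as a sketch:

    # Map OSD id -> LV path and backing device from ceph-volume-style JSON.
    ceph-volume lvm list --format json \
      | jq -r 'to_entries[] | .value[] | "osd.\(.tags["ceph.osd_id"]) \(.lv_path) \(.devices[0])"'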
Dec  1 05:05:51 np0005540825 systemd[1]: libpod-60549e417043d9fdffeb301f7e9f1d84e2ae8676fd7cdc2d7e514282cfce8783.scope: Deactivated successfully.
Dec  1 05:05:51 np0005540825 podman[220740]: 2025-12-01 10:05:51.879005526 +0000 UTC m=+0.453579432 container died 60549e417043d9fdffeb301f7e9f1d84e2ae8676fd7cdc2d7e514282cfce8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:05:51 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e00e05873eb567e8a40531b98bddde5c83bbd8805eb2ea7ab1ecd0012fe472ac-merged.mount: Deactivated successfully.
Dec  1 05:05:51 np0005540825 podman[220740]: 2025-12-01 10:05:51.93004387 +0000 UTC m=+0.504617766 container remove 60549e417043d9fdffeb301f7e9f1d84e2ae8676fd7cdc2d7e514282cfce8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jemison, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 05:05:51 np0005540825 systemd[1]: libpod-conmon-60549e417043d9fdffeb301f7e9f1d84e2ae8676fd7cdc2d7e514282cfce8783.scope: Deactivated successfully.
Dec  1 05:05:52 np0005540825 podman[221100]: 2025-12-01 10:05:52.555619299 +0000 UTC m=+0.055703008 container create 863f1d1ca938c9d3b84b2216fdb1a526d68a31f71d58e2230865f282719a81a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:05:52 np0005540825 systemd[1]: Started libpod-conmon-863f1d1ca938c9d3b84b2216fdb1a526d68a31f71d58e2230865f282719a81a2.scope.
Dec  1 05:05:52 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:05:52 np0005540825 podman[221100]: 2025-12-01 10:05:52.525517226 +0000 UTC m=+0.025600935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:05:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:52.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:52 np0005540825 podman[221100]: 2025-12-01 10:05:52.633739185 +0000 UTC m=+0.133822894 container init 863f1d1ca938c9d3b84b2216fdb1a526d68a31f71d58e2230865f282719a81a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 05:05:52 np0005540825 podman[221100]: 2025-12-01 10:05:52.642028024 +0000 UTC m=+0.142111713 container start 863f1d1ca938c9d3b84b2216fdb1a526d68a31f71d58e2230865f282719a81a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hodgkin, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Dec  1 05:05:52 np0005540825 podman[221100]: 2025-12-01 10:05:52.645589417 +0000 UTC m=+0.145673136 container attach 863f1d1ca938c9d3b84b2216fdb1a526d68a31f71d58e2230865f282719a81a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 05:05:52 np0005540825 vigilant_hodgkin[221116]: 167 167
Dec  1 05:05:52 np0005540825 systemd[1]: libpod-863f1d1ca938c9d3b84b2216fdb1a526d68a31f71d58e2230865f282719a81a2.scope: Deactivated successfully.
Dec  1 05:05:52 np0005540825 conmon[221116]: conmon 863f1d1ca938c9d3b84b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-863f1d1ca938c9d3b84b2216fdb1a526d68a31f71d58e2230865f282719a81a2.scope/container/memory.events
Dec  1 05:05:52 np0005540825 podman[221100]: 2025-12-01 10:05:52.647413965 +0000 UTC m=+0.147497654 container died 863f1d1ca938c9d3b84b2216fdb1a526d68a31f71d58e2230865f282719a81a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hodgkin, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 05:05:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ec387131e2b539f547de4f6694ce840a40f0acf3c41206eac2c39393fdf437b1-merged.mount: Deactivated successfully.
Dec  1 05:05:52 np0005540825 python3[221099]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 05:05:52 np0005540825 podman[221100]: 2025-12-01 10:05:52.679591993 +0000 UTC m=+0.179675682 container remove 863f1d1ca938c9d3b84b2216fdb1a526d68a31f71d58e2230865f282719a81a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 05:05:52 np0005540825 systemd[1]: libpod-conmon-863f1d1ca938c9d3b84b2216fdb1a526d68a31f71d58e2230865f282719a81a2.scope: Deactivated successfully.
Dec  1 05:05:52 np0005540825 podman[221165]: 2025-12-01 10:05:52.881243011 +0000 UTC m=+0.075043816 container create d7fbf98a0cd23d4ed1649e62a06320a960f0ee4c5467c1385c594e10121e5aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:05:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 121 B/s wr, 0 op/s
Dec  1 05:05:52 np0005540825 systemd[1]: Started libpod-conmon-d7fbf98a0cd23d4ed1649e62a06320a960f0ee4c5467c1385c594e10121e5aea.scope.
Dec  1 05:05:52 np0005540825 podman[221165]: 2025-12-01 10:05:52.851612871 +0000 UTC m=+0.045413746 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:05:52 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:05:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38c0b53fc5e77e93eb609a2d54d72f969372269980843f26ba41084da973e24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38c0b53fc5e77e93eb609a2d54d72f969372269980843f26ba41084da973e24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38c0b53fc5e77e93eb609a2d54d72f969372269980843f26ba41084da973e24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:52 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38c0b53fc5e77e93eb609a2d54d72f969372269980843f26ba41084da973e24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:05:52 np0005540825 podman[221165]: 2025-12-01 10:05:52.98945752 +0000 UTC m=+0.183258415 container init d7fbf98a0cd23d4ed1649e62a06320a960f0ee4c5467c1385c594e10121e5aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_volhard, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 05:05:53 np0005540825 podman[221165]: 2025-12-01 10:05:53.001868067 +0000 UTC m=+0.195668902 container start d7fbf98a0cd23d4ed1649e62a06320a960f0ee4c5467c1385c594e10121e5aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 05:05:53 np0005540825 podman[221165]: 2025-12-01 10:05:53.008151373 +0000 UTC m=+0.201952218 container attach d7fbf98a0cd23d4ed1649e62a06320a960f0ee4c5467c1385c594e10121e5aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_volhard, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 05:05:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:05:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:53.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:05:53 np0005540825 python3.9[221346]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:53 np0005540825 lvm[221387]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:05:53 np0005540825 lvm[221387]: VG ceph_vg0 finished
Dec  1 05:05:53 np0005540825 dreamy_volhard[221204]: {}
Dec  1 05:05:53 np0005540825 systemd[1]: libpod-d7fbf98a0cd23d4ed1649e62a06320a960f0ee4c5467c1385c594e10121e5aea.scope: Deactivated successfully.
Dec  1 05:05:53 np0005540825 podman[221165]: 2025-12-01 10:05:53.716828509 +0000 UTC m=+0.910629304 container died d7fbf98a0cd23d4ed1649e62a06320a960f0ee4c5467c1385c594e10121e5aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:05:53 np0005540825 systemd[1]: libpod-d7fbf98a0cd23d4ed1649e62a06320a960f0ee4c5467c1385c594e10121e5aea.scope: Consumed 1.156s CPU time.
Dec  1 05:05:53 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b38c0b53fc5e77e93eb609a2d54d72f969372269980843f26ba41084da973e24-merged.mount: Deactivated successfully.
Dec  1 05:05:53 np0005540825 podman[221165]: 2025-12-01 10:05:53.760017776 +0000 UTC m=+0.953818571 container remove d7fbf98a0cd23d4ed1649e62a06320a960f0ee4c5467c1385c594e10121e5aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:05:53 np0005540825 systemd[1]: libpod-conmon-d7fbf98a0cd23d4ed1649e62a06320a960f0ee4c5467c1385c594e10121e5aea.scope: Deactivated successfully.
Dec  1 05:05:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:05:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:05:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:54 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:54 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:05:54 np0005540825 python3.9[221478]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:05:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
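[annotation] The mon_command/audit lines above show cephadm caching its per-host device inventory in the monitor's config-key store under mgr/cephadm/host.compute-0.*. The stored value can be read back with the standard config-key command (key name copied from the log):

    # Show the device inventory cephadm stored for host compute-0.
    ceph config-key get mgr/cephadm/host.compute-0.devices.0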
Dec  1 05:05:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:54.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:05:54 np0005540825 python3.9[221655]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 121 B/s wr, 0 op/s
Dec  1 05:05:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:05:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:55.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:05:55 np0005540825 python3.9[221734]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:56 np0005540825 python3.9[221887]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:56 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:05:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:56 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:05:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:56.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:56 np0005540825 python3.9[221965]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 847 B/s wr, 2 op/s
Dec  1 05:05:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:57.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:05:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:57.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:57 np0005540825 python3.9[222119]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:58 np0005540825 python3.9[222197]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:05:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:05:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:05:58.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:05:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 716 B/s wr, 2 op/s
Dec  1 05:05:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:05:58.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:05:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:05:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:05:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:05:59.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:05:59 np0005540825 python3.9[222350]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:05:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:59 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:05:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:59 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:05:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:59 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:05:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:05:59 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:05:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:05:59 np0005540825 python3.9[222476]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764583558.6630626-3899-203618456430520/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:06:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:00.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:00 np0005540825 python3.9[222653]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:06:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  1 05:06:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:01.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:01] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:06:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:01] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:06:01 np0005540825 python3.9[222807]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  1 05:06:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:06:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:02.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:02 np0005540825 python3.9[222973]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
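[annotation] Decoding the #012 (newline) escapes in the blockinfile invocation above, the block it maintains between the ANSIBLE MANAGED markers in /etc/sysconfig/nftables.conf is:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The validate=nft -c -f %s parameter means ansible dry-runs the edited file before putting it in place.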
Dec  1 05:06:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:06:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:06:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:03.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:06:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:03 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7d8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:03 np0005540825 python3.9[223131]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:06:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:03 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:04 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:06:04.557 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:06:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:06:04.558 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:06:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:06:04.558 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:06:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:04.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:04 np0005540825 python3.9[223284]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:06:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:06:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:05.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:05 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:05 np0005540825 python3.9[223439]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
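[annotation] Taken together, the nft tasks logged between 10:06:01 and 10:06:05 follow a check-then-apply pattern: first dry-run the full concatenated ruleset with -c, then load the chain definitions, then apply flushes, rules, and jump updates from a single stream so the swap is one atomic nft transaction. Replayed by hand, the sequence from the three command lines above is:

    # 1. Dry-run the complete ruleset (-c checks syntax and references,
    #    changes nothing in the kernel).
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # 2. Ensure the chains exist, then apply flush + rules + jump updates
    #    in one transaction.
    nft -f /etc/nftables/edpm-chains.nft
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -

The edpm-rules.nft.changed touch/stat/absent dance at 10:06:00-10:06:06 is just the role's marker file: rules are re-applied only when that flag exists.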
Dec  1 05:06:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100605 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:06:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:05 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100606 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:06:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:06 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:06 np0005540825 python3.9[223595]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:06:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:06:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:06.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:06:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Dec  1 05:06:07 np0005540825 python3.9[223747]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:06:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:07.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:06:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:07.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
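The alertmanager dispatcher above is retrying POSTs to the ceph-dashboard webhook on compute-1/compute-2 and timing out. A hypothetical stand-in receiver for that endpoint (path and port taken from the log; the TLS that port 8443 implies is omitted from this sketch):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_error(404)
                return
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            print("received", len(json.loads(body).get("alerts", [])), "alert(s)")
            self.send_response(200)
            self.end_headers()

    HTTPServer(("", 8443), Receiver).serve_forever()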
Dec  1 05:06:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:07.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:07 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:07 np0005540825 python3.9[223872]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583566.5187507-4115-27791126002762/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:06:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:07 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:08 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:08 np0005540825 podman[223995]: 2025-12-01 10:06:08.216300356 +0000 UTC m=+0.070173799 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
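The podman health_status event above records a scheduled run of the container's /openstack/healthcheck test. The same state can be read back on demand (container name from the log; assumes the podman CLI on the host):

    import json, subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "ovn_metadata_agent"],
        capture_output=True, text=True, check=True).stdout
    health = json.loads(out)
    print(health["Status"], health["FailingStreak"])  # e.g. healthy 0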
Dec  1 05:06:08 np0005540825 python3.9[224044]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:06:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:08.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Dec  1 05:06:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:08.921Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:06:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:08.921Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:06:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:08.922Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:06:09 np0005540825 python3.9[224167]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583567.9072938-4160-97352465813884/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:06:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:09.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:09 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:06:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
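The mgr polls the mon for the OSD blocklist on a ~15 s timer, which is what the handle_command/audit pair above records (see the repeats at 10:06:24 and 10:06:39). The same query from the CLI, assuming an admin keyring on the node:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out) or "no blocklisted clients")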
Dec  1 05:06:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:06:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:06:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:06:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:06:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:06:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:06:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:09 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:10 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:10 np0005540825 python3.9[224321]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:06:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:10.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Dec  1 05:06:11 np0005540825 python3.9[224444]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583569.9943619-4205-216253673390047/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:06:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:11.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:11 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100611 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
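haproxy's Layer4 checks, as in the DOWN/UP transitions logged above, are plain TCP connects: refused marks the server DOWN, accepted marks it UP. A minimal equivalent (backend address and NFS port are assumptions):

    import socket

    def l4_check(host: str, port: int, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(l4_check("192.168.122.100", 2049))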
Dec  1 05:06:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:11] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:06:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:11] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:06:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:11 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:12 np0005540825 python3.9[224598]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:06:12 np0005540825 systemd[1]: Reloading.
Dec  1 05:06:12 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:06:12 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:06:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:12 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:12 np0005540825 systemd[1]: Reached target edpm_libvirt.target.
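The ansible systemd task above amounts to a daemon-reload followed by enable and restart of edpm_libvirt.target, after which systemd reports the target reached. The same sequence as plain systemctl calls:

    import subprocess

    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm_libvirt.target"],
                ["systemctl", "restart", "edpm_libvirt.target"]):
        subprocess.run(cmd, check=True)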
Dec  1 05:06:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:12.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:06:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:13.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:13 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:13 np0005540825 python3.9[224789]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  1 05:06:13 np0005540825 systemd[1]: Reloading.
Dec  1 05:06:13 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:06:13 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:06:13 np0005540825 systemd[1]: Reloading.
Dec  1 05:06:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:13 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:13 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:06:13 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:06:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:14 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:14 np0005540825 systemd[1]: session-53.scope: Deactivated successfully.
Dec  1 05:06:14 np0005540825 systemd[1]: session-53.scope: Consumed 3min 50.409s CPU time.
Dec  1 05:06:14 np0005540825 systemd-logind[789]: Session 53 logged out. Waiting for processes to exit.
Dec  1 05:06:14 np0005540825 systemd-logind[789]: Removed session 53.
Dec  1 05:06:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:14.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:06:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:15.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:15 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:15 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:16 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:16.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:06:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:17.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:06:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:17.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:17 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:17 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:18 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:18.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:06:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:18.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:06:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:19 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:19 : epoch 692d687e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:06:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:19 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:20 np0005540825 systemd-logind[789]: New session 54 of user zuul.
Dec  1 05:06:20 np0005540825 systemd[1]: Started Session 54 of User zuul.
Dec  1 05:06:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:20 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b4002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:20.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec  1 05:06:21 np0005540825 python3.9[225072]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 05:06:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:06:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:21.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:06:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:21 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:21] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:06:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:21] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:06:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:21 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:22 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:22 np0005540825 podman[225155]: 2025-12-01 10:06:22.322559992 +0000 UTC m=+0.166317579 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec  1 05:06:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:22 : epoch 692d687e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:06:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:22 : epoch 692d687e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:06:22 np0005540825 python3.9[225254]: ansible-ansible.builtin.service_facts Invoked
Dec  1 05:06:22 np0005540825 network[225271]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 05:06:22 np0005540825 network[225272]: 'network-scripts' will be removed from distribution in near future.
Dec  1 05:06:22 np0005540825 network[225273]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 05:06:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:22.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec  1 05:06:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:23.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:23 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:23 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:24 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:06:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:06:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:24.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec  1 05:06:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:25.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:25 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:25 : epoch 692d687e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:06:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:25 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:26 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:26.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:06:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:27.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:06:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:27.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:06:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:27.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:27 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:27 np0005540825 python3.9[225551]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 05:06:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:27 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:28 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:28.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:28 np0005540825 python3.9[225635]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 05:06:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:06:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:28.925Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:06:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:29.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:29 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:29 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:30 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:30.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:06:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:31.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:31 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100631 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:06:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:31] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Dec  1 05:06:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:31] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Dec  1 05:06:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:31 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:32 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:32.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:06:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:33.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:33 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:33 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0038f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:34 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:34.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:06:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:35.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:35 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:35 np0005540825 python3.9[225797]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:06:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:35 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:36 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a8000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:36 np0005540825 python3.9[225950]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:06:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:36.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:06:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:37.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:06:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:37.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:06:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:37.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:37 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:37 np0005540825 python3.9[226105]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:06:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:37 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:38 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:38 np0005540825 python3.9[226257]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:06:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:38.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:38 np0005540825 podman[226382]: 2025-12-01 10:06:38.838879726 +0000 UTC m=+0.067800896 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:06:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:06:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:38.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:06:39 np0005540825 python3.9[226429]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:06:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:39.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:39 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:06:39
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.nfs', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'backups']
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
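That block is one full balancer pass: plan auto_2025-12-01_10:06:39 was built in upmap mode with a 5% max-misplaced budget over the listed pools, and 0 of up to 10 candidate upmap changes were prepared, meaning the PG distribution is already as even as upmap can make it. The same state can be read back with the CLI; a minimal sketch assuming an admin keyring is reachable:

    import json
    import subprocess

    # Hedged sketch: "ceph balancer status" reports the mode/activity that
    # produced the balancer log lines above.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status["mode"], status["active"])  # expected: upmap True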
Dec  1 05:06:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:06:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
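Each pg_autoscaler line above computes a pool's PG target as its share of raw capacity (the "using ... of space" figure against the 64411926528-byte total) times its bias, scaled by the cluster's PG budget, then quantized to a power of two; no change is applied while the target stays within tolerance of the current value, which is why 'cephfs.cephfs.meta' keeps 32 despite quantizing to 16. The logged numbers fit a budget of 300 PGs, consistent with the default mon_target_pg_per_osd = 100 on 3 OSDs; both of those values are inferred here, not logged. A worked check against the cephfs.cephfs.meta line:

    # Hedged sketch: reproduce the "pg target" arithmetic from the
    # cephfs.cephfs.meta autoscaler line above.
    usage_fraction = 5.087256625643029e-07  # "using ... of space" (logged)
    bias = 4.0                              # logged bias for metadata pools
    target_pg_per_osd = 100                 # assumed Ceph default
    num_osds = 3                            # assumed from the 60 GiB cluster

    pg_target = usage_fraction * bias * target_pg_per_osd * num_osds
    print(pg_target)  # 0.0006104707950771635, matching the logged value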
Dec  1 05:06:39 np0005540825 python3.9[226555]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583598.4539661-245-147316195178963/.source.iscsi _original_basename=._2d49w_7 follow=False checksum=8476c13e353fe0725f195e9ce0ffbd8df891dad5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:06:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:06:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:39 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:40 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:40 np0005540825 python3.9[226732]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:06:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:40.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:06:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:41.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:41 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:41] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:06:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:41] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:06:41 np0005540825 python3.9[226885]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
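The lineinfile task above pins the CHAP digest negotiation order in /etc/iscsi/iscsid.conf, strongest first. After it runs the file is expected to carry this exact line (taken verbatim from the task parameters):

    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5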
Dec  1 05:06:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:41 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:42 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:42.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:42 np0005540825 python3.9[227038]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:06:42 np0005540825 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  1 05:06:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:06:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:06:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:43.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:06:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:43 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:43 np0005540825 python3.9[227196]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:06:43 np0005540825 systemd[1]: Reloading.
Dec  1 05:06:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:43 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:43 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:06:43 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:06:44 np0005540825 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  1 05:06:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:44 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:44 np0005540825 systemd[1]: Starting Open-iSCSI...
Dec  1 05:06:44 np0005540825 kernel: Loading iSCSI transport class v2.0-870.
Dec  1 05:06:44 np0005540825 systemd[1]: Started Open-iSCSI.
Dec  1 05:06:44 np0005540825 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  1 05:06:44 np0005540825 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
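The two systemd_service tasks (iscsid.socket at 05:06:42, iscsid at 05:06:43) leave socket-activated iSCSI fully up, as the Started/Finished lines above confirm; the one-time iscsi.service configuration was skipped correctly because /etc/iscsi/initiatorname.iscsi already exists. A minimal non-Ansible equivalent:

    import subprocess

    # Hedged sketch: enable and start the same units the Ansible tasks did.
    for unit in ("iscsid.socket", "iscsid.service"):
        subprocess.run(["systemctl", "enable", "--now", unit], check=True)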
Dec  1 05:06:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:44.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:06:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:45.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:45 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:45 np0005540825 python3.9[227397]: ansible-ansible.builtin.service_facts Invoked
Dec  1 05:06:45 np0005540825 network[227416]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 05:06:45 np0005540825 network[227417]: 'network-scripts' will be removed from distribution in near future.
Dec  1 05:06:45 np0005540825 network[227418]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 05:06:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:45 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:46 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 05:06:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:47.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:06:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:47.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:06:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:47.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:47 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:47 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:48 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:48.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:06:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:48.927Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:06:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:48.927Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:06:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:48.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
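The dispatcher errors above show Alertmanager abandoning both ceph-dashboard webhook receivers after two attempts each, first on context deadlines and then on outright dial timeouts to 192.168.122.101/102 port 8443. A minimal sketch for probing one receiver by hand (the empty JSON body is illustrative, not the real Alertmanager payload):

    import urllib.request

    # Hedged sketch: POST to the receiver URL from the log above to test
    # reachability. Body, headers, and timeout are placeholders.
    req = urllib.request.Request(
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
        data=b"{}",
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print("receiver answered:", resp.status)
    except OSError as exc:  # matches the dial timeouts in the log
        print("receiver unreachable:", exc)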
Dec  1 05:06:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:06:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:49.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:06:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:49 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:49 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:50 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:50.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:06:50 np0005540825 python3.9[227694]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 05:06:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:51.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:51 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:51] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:06:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:06:51] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:06:51 np0005540825 python3.9[227848]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  1 05:06:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:51 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:52 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0039d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:52 np0005540825 podman[228005]: 2025-12-01 10:06:52.542179513 +0000 UTC m=+0.122830955 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller)
Dec  1 05:06:52 np0005540825 python3.9[228006]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:06:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:52.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:06:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:06:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4016 writes, 18K keys, 4016 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s#012Cumulative WAL: 4016 writes, 4016 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1485 writes, 6013 keys, 1485 commit groups, 1.0 writes per commit group, ingest: 11.17 MB, 0.02 MB/s#012Interval WAL: 1485 writes, 1485 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     37.9      0.71              0.09         8    0.089       0      0       0.0       0.0#012  L6      1/0   11.53 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     52.9     45.2      1.98              0.29         7    0.283     33K   3673       0.0       0.0#012 Sum      1/0   11.53 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     39.0     43.2      2.69              0.38        15    0.180     33K   3673       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.5     22.1     21.3      2.09              0.16         6    0.348     16K   1857       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     52.9     45.2      1.98              0.29         7    0.283     33K   3673       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     38.0      0.71              0.09         7    0.101       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.026, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.11 GB write, 0.10 MB/s write, 0.10 GB read, 0.09 MB/s read, 2.7 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.05 GB read, 0.08 MB/s read, 2.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563970129350#2 capacity: 304.00 MB usage: 4.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 8.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(267,4.36 MB,1.43544%) FilterBlock(16,102.05 KB,0.0327813%) IndexBlock(16,190.95 KB,0.0613413%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
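The RocksDB stats dump above is a single log record: rsyslog escapes the newlines embedded in the message as "#012" (octal 012 = LF). Decoding the escape restores the original table layout; a minimal sketch on a fragment of that record:

    # Hedged sketch: undo rsyslog's newline escaping on a fragment of the
    # stats record above.
    fragment = "** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval"
    print(fragment.replace("#012", "\n"))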
Dec  1 05:06:53 np0005540825 python3.9[228155]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583612.086862-476-232297890277363/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:06:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:53 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7d0001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:53 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:54 np0005540825 python3.9[228309]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:06:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:54 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:06:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:06:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:54.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:06:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:06:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:06:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:06:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:06:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:06:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 305 B/s rd, 0 op/s
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:06:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:55.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:55 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc0039f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:55 np0005540825 python3.9[228543]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 05:06:55 np0005540825 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  1 05:06:55 np0005540825 systemd[1]: Stopped Load Kernel Modules.
Dec  1 05:06:55 np0005540825 systemd[1]: Stopping Load Kernel Modules...
Dec  1 05:06:55 np0005540825 systemd[1]: Starting Load Kernel Modules...
Dec  1 05:06:55 np0005540825 systemd[1]: Finished Load Kernel Modules.
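The sequence from 05:06:50 onward stages dm-multipath persistently: modprobe loads it immediately, /etc/modules-load.d/dm-multipath.conf and the /etc/modules line cover future boots, and the clean systemd-modules-load restart above confirms the configuration parses. A minimal verification sketch (note the kernel reports the module name with an underscore):

    from pathlib import Path

    # Hedged sketch: check the persistence file and the live module table.
    conf = Path("/etc/modules-load.d/dm-multipath.conf").read_text()
    print("configured:", "dm-multipath" in conf or "dm_multipath" in conf)
    print("loaded:", "dm_multipath" in Path("/proc/modules").read_text())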
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:06:55 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:06:55 np0005540825 podman[228683]: 2025-12-01 10:06:55.636200647 +0000 UTC m=+0.044539103 container create 446dc71ab3a3c35ae7023dee7fdd72be111b866ec6700ab30be1a5dd05dca8da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 05:06:55 np0005540825 systemd[1]: Started libpod-conmon-446dc71ab3a3c35ae7023dee7fdd72be111b866ec6700ab30be1a5dd05dca8da.scope.
Dec  1 05:06:55 np0005540825 podman[228683]: 2025-12-01 10:06:55.617927536 +0000 UTC m=+0.026266012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:06:55 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:06:55 np0005540825 podman[228683]: 2025-12-01 10:06:55.748626217 +0000 UTC m=+0.156964743 container init 446dc71ab3a3c35ae7023dee7fdd72be111b866ec6700ab30be1a5dd05dca8da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  1 05:06:55 np0005540825 podman[228683]: 2025-12-01 10:06:55.757042539 +0000 UTC m=+0.165380985 container start 446dc71ab3a3c35ae7023dee7fdd72be111b866ec6700ab30be1a5dd05dca8da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:06:55 np0005540825 podman[228683]: 2025-12-01 10:06:55.760691055 +0000 UTC m=+0.169029581 container attach 446dc71ab3a3c35ae7023dee7fdd72be111b866ec6700ab30be1a5dd05dca8da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_margulis, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 05:06:55 np0005540825 admiring_margulis[228739]: 167 167
Dec  1 05:06:55 np0005540825 systemd[1]: libpod-446dc71ab3a3c35ae7023dee7fdd72be111b866ec6700ab30be1a5dd05dca8da.scope: Deactivated successfully.
Dec  1 05:06:55 np0005540825 podman[228683]: 2025-12-01 10:06:55.765141192 +0000 UTC m=+0.173479638 container died 446dc71ab3a3c35ae7023dee7fdd72be111b866ec6700ab30be1a5dd05dca8da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_margulis, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:06:55 np0005540825 systemd[1]: var-lib-containers-storage-overlay-84db912a5acd98c602ab2a33385edefcda4436dad2ed6b94c902d80215e49bab-merged.mount: Deactivated successfully.
Dec  1 05:06:55 np0005540825 podman[228683]: 2025-12-01 10:06:55.801608762 +0000 UTC m=+0.209947228 container remove 446dc71ab3a3c35ae7023dee7fdd72be111b866ec6700ab30be1a5dd05dca8da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_margulis, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  1 05:06:55 np0005540825 systemd[1]: libpod-conmon-446dc71ab3a3c35ae7023dee7fdd72be111b866ec6700ab30be1a5dd05dca8da.scope: Deactivated successfully.
Dec  1 05:06:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:55 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7d0001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:55 np0005540825 podman[228832]: 2025-12-01 10:06:55.970713643 +0000 UTC m=+0.047330926 container create 026dd1b0b5ae6c3e3309cc362b429f4b5a3030db48ab096c11d5e1aa5c036441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kapitsa, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:06:56 np0005540825 systemd[1]: Started libpod-conmon-026dd1b0b5ae6c3e3309cc362b429f4b5a3030db48ab096c11d5e1aa5c036441.scope.
Dec  1 05:06:56 np0005540825 podman[228832]: 2025-12-01 10:06:55.950780509 +0000 UTC m=+0.027397812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:06:56 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:06:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995db6a214e2645515b88c4c6d626bf87aef461a345e743ab72741a708d19598/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995db6a214e2645515b88c4c6d626bf87aef461a345e743ab72741a708d19598/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995db6a214e2645515b88c4c6d626bf87aef461a345e743ab72741a708d19598/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995db6a214e2645515b88c4c6d626bf87aef461a345e743ab72741a708d19598/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995db6a214e2645515b88c4c6d626bf87aef461a345e743ab72741a708d19598/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:56 np0005540825 podman[228832]: 2025-12-01 10:06:56.071850706 +0000 UTC m=+0.148467999 container init 026dd1b0b5ae6c3e3309cc362b429f4b5a3030db48ab096c11d5e1aa5c036441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kapitsa, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 05:06:56 np0005540825 podman[228832]: 2025-12-01 10:06:56.086106112 +0000 UTC m=+0.162723385 container start 026dd1b0b5ae6c3e3309cc362b429f4b5a3030db48ab096c11d5e1aa5c036441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  1 05:06:56 np0005540825 podman[228832]: 2025-12-01 10:06:56.089362207 +0000 UTC m=+0.165979490 container attach 026dd1b0b5ae6c3e3309cc362b429f4b5a3030db48ab096c11d5e1aa5c036441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:06:56 np0005540825 python3.9[228828]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:06:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:56 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:56 np0005540825 goofy_kapitsa[228847]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:06:56 np0005540825 goofy_kapitsa[228847]: --> All data devices are unavailable
Dec  1 05:06:56 np0005540825 systemd[1]: libpod-026dd1b0b5ae6c3e3309cc362b429f4b5a3030db48ab096c11d5e1aa5c036441.scope: Deactivated successfully.
Dec  1 05:06:56 np0005540825 podman[228832]: 2025-12-01 10:06:56.476975701 +0000 UTC m=+0.553592974 container died 026dd1b0b5ae6c3e3309cc362b429f4b5a3030db48ab096c11d5e1aa5c036441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kapitsa, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:06:56 np0005540825 systemd[1]: var-lib-containers-storage-overlay-995db6a214e2645515b88c4c6d626bf87aef461a345e743ab72741a708d19598-merged.mount: Deactivated successfully.
Dec  1 05:06:56 np0005540825 podman[228832]: 2025-12-01 10:06:56.527794899 +0000 UTC m=+0.604412192 container remove 026dd1b0b5ae6c3e3309cc362b429f4b5a3030db48ab096c11d5e1aa5c036441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:06:56 np0005540825 systemd[1]: libpod-conmon-026dd1b0b5ae6c3e3309cc362b429f4b5a3030db48ab096c11d5e1aa5c036441.scope: Deactivated successfully.
Dec  1 05:06:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:56.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
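
The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, answered 200 with near-zero latency every couple of seconds, are load-balancer health probes against radosgw's beast frontend rather than client traffic. An equivalent probe, sketched in Python (the listening port is not shown in the log, so 8080 here is an assumption):

    import http.client

    # Hypothetical endpoint: host from the log, port assumed.
    conn = http.client.HTTPConnection("np0005540825", 8080, timeout=2)
    conn.request("HEAD", "/")  # anonymous, no auth header; radosgw still answers 200
    print(conn.getresponse().status)
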
Dec  1 05:06:56 np0005540825 python3.9[229078]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:06:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 305 B/s rd, 0 op/s
Dec  1 05:06:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:57.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
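
This alertmanager dispatcher error repeats throughout the window: both ceph-dashboard webhook receivers (compute-1 and compute-2 on port 8443) time out with "context deadline exceeded" after two retry attempts, so the dashboard's Prometheus receiver endpoint is unreachable. What alertmanager is attempting is an HTTP POST of the alert batch; a hedged sketch with a minimal stand-in payload (the real body follows the Alertmanager webhook schema):

    import json
    import urllib.request

    # URL copied from the log line; this payload is a placeholder, not
    # the actual alert batch alertmanager sends.
    payload = {"alerts": [{"status": "firing", "labels": {"alertname": "example"}}]}
    req = urllib.request.Request(
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Raises URLError on timeout, which is what the dispatcher reports.
    urllib.request.urlopen(req, timeout=5)
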
Dec  1 05:06:57 np0005540825 podman[229144]: 2025-12-01 10:06:57.13710011 +0000 UTC m=+0.061002087 container create add3bf071c7966af2419847c9851f3cfc5aa09acf39e5728d49fdff121c91561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:06:57 np0005540825 systemd[1]: Started libpod-conmon-add3bf071c7966af2419847c9851f3cfc5aa09acf39e5728d49fdff121c91561.scope.
Dec  1 05:06:57 np0005540825 podman[229144]: 2025-12-01 10:06:57.109234926 +0000 UTC m=+0.033136983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:06:57 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:06:57 np0005540825 podman[229144]: 2025-12-01 10:06:57.231025603 +0000 UTC m=+0.154927630 container init add3bf071c7966af2419847c9851f3cfc5aa09acf39e5728d49fdff121c91561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 05:06:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:57.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:57 np0005540825 podman[229144]: 2025-12-01 10:06:57.240530413 +0000 UTC m=+0.164432440 container start add3bf071c7966af2419847c9851f3cfc5aa09acf39e5728d49fdff121c91561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 05:06:57 np0005540825 brave_gates[229164]: 167 167
Dec  1 05:06:57 np0005540825 podman[229144]: 2025-12-01 10:06:57.245205056 +0000 UTC m=+0.169107033 container attach add3bf071c7966af2419847c9851f3cfc5aa09acf39e5728d49fdff121c91561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:06:57 np0005540825 systemd[1]: libpod-add3bf071c7966af2419847c9851f3cfc5aa09acf39e5728d49fdff121c91561.scope: Deactivated successfully.
Dec  1 05:06:57 np0005540825 podman[229144]: 2025-12-01 10:06:57.245529314 +0000 UTC m=+0.169431291 container died add3bf071c7966af2419847c9851f3cfc5aa09acf39e5728d49fdff121c91561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:06:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:57 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:57 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b37efcf24ab32739f3248f0c1ee3e384158307f880c6996774eb8452e64a19dd-merged.mount: Deactivated successfully.
Dec  1 05:06:57 np0005540825 podman[229144]: 2025-12-01 10:06:57.287059888 +0000 UTC m=+0.210961865 container remove add3bf071c7966af2419847c9851f3cfc5aa09acf39e5728d49fdff121c91561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 05:06:57 np0005540825 systemd[1]: libpod-conmon-add3bf071c7966af2419847c9851f3cfc5aa09acf39e5728d49fdff121c91561.scope: Deactivated successfully.
Dec  1 05:06:57 np0005540825 podman[229272]: 2025-12-01 10:06:57.47515996 +0000 UTC m=+0.049927096 container create 580f6dbe00fb1220b963bc1062891094a32bf2682916c7ff4c9e7bcc1b1b6109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kapitsa, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 05:06:57 np0005540825 systemd[1]: Started libpod-conmon-580f6dbe00fb1220b963bc1062891094a32bf2682916c7ff4c9e7bcc1b1b6109.scope.
Dec  1 05:06:57 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:06:57 np0005540825 podman[229272]: 2025-12-01 10:06:57.454114476 +0000 UTC m=+0.028881632 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:06:57 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0d33c828732534d66bec3a6d1a8947cccc8bfb35c320528d7f8f5b7215f63ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:57 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0d33c828732534d66bec3a6d1a8947cccc8bfb35c320528d7f8f5b7215f63ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:57 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0d33c828732534d66bec3a6d1a8947cccc8bfb35c320528d7f8f5b7215f63ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:57 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0d33c828732534d66bec3a6d1a8947cccc8bfb35c320528d7f8f5b7215f63ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:57 np0005540825 podman[229272]: 2025-12-01 10:06:57.573729785 +0000 UTC m=+0.148496951 container init 580f6dbe00fb1220b963bc1062891094a32bf2682916c7ff4c9e7bcc1b1b6109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:06:57 np0005540825 podman[229272]: 2025-12-01 10:06:57.583369799 +0000 UTC m=+0.158136935 container start 580f6dbe00fb1220b963bc1062891094a32bf2682916c7ff4c9e7bcc1b1b6109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kapitsa, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  1 05:06:57 np0005540825 podman[229272]: 2025-12-01 10:06:57.588797902 +0000 UTC m=+0.163565018 container attach 580f6dbe00fb1220b963bc1062891094a32bf2682916c7ff4c9e7bcc1b1b6109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:06:57 np0005540825 python3.9[229330]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]: {
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:    "1": [
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:        {
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            "devices": [
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "/dev/loop3"
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            ],
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            "lv_name": "ceph_lv0",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            "lv_size": "21470642176",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            "name": "ceph_lv0",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            "tags": {
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.cluster_name": "ceph",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.crush_device_class": "",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.encrypted": "0",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.osd_id": "1",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.type": "block",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.vdo": "0",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:                "ceph.with_tpm": "0"
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            },
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            "type": "block",
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:            "vg_name": "ceph_vg0"
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:        }
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]:    ]
Dec  1 05:06:57 np0005540825 gracious_kapitsa[229328]: }
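
The gracious_kapitsa probe prints ceph-volume lvm list style JSON: OSD id 1 is backed by LV ceph_vg0/ceph_lv0 on /dev/loop3, with the cluster fsid and osd_fsid carried as LV tags. A short sketch that pulls those fields back out of the logged structure (keys exactly as shown above; in the real case the dict would come from json.loads of the probe's stdout):

    # Trimmed to the fields used below; shape as logged above.
    report = {"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "devices": ["/dev/loop3"],
                     "tags": {"ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047"}}]}
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
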
Dec  1 05:06:57 np0005540825 systemd[1]: libpod-580f6dbe00fb1220b963bc1062891094a32bf2682916c7ff4c9e7bcc1b1b6109.scope: Deactivated successfully.
Dec  1 05:06:57 np0005540825 podman[229272]: 2025-12-01 10:06:57.888647926 +0000 UTC m=+0.463415052 container died 580f6dbe00fb1220b963bc1062891094a32bf2682916c7ff4c9e7bcc1b1b6109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:06:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:57 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003a10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:57 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b0d33c828732534d66bec3a6d1a8947cccc8bfb35c320528d7f8f5b7215f63ea-merged.mount: Deactivated successfully.
Dec  1 05:06:57 np0005540825 podman[229272]: 2025-12-01 10:06:57.929972323 +0000 UTC m=+0.504739480 container remove 580f6dbe00fb1220b963bc1062891094a32bf2682916c7ff4c9e7bcc1b1b6109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  1 05:06:57 np0005540825 systemd[1]: libpod-conmon-580f6dbe00fb1220b963bc1062891094a32bf2682916c7ff4c9e7bcc1b1b6109.scope: Deactivated successfully.
Dec  1 05:06:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:58 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003a10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:58 np0005540825 python3.9[229549]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:06:58 np0005540825 podman[229615]: 2025-12-01 10:06:58.545796005 +0000 UTC m=+0.046683190 container create 834c323e73e9b46cbb935a29c9e38d517cfed51cc9bc1378059f11a482873b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:06:58 np0005540825 systemd[1]: Started libpod-conmon-834c323e73e9b46cbb935a29c9e38d517cfed51cc9bc1378059f11a482873b24.scope.
Dec  1 05:06:58 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:06:58 np0005540825 podman[229615]: 2025-12-01 10:06:58.52356209 +0000 UTC m=+0.024449265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:06:58 np0005540825 podman[229615]: 2025-12-01 10:06:58.625841532 +0000 UTC m=+0.126728727 container init 834c323e73e9b46cbb935a29c9e38d517cfed51cc9bc1378059f11a482873b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  1 05:06:58 np0005540825 podman[229615]: 2025-12-01 10:06:58.636862772 +0000 UTC m=+0.137749937 container start 834c323e73e9b46cbb935a29c9e38d517cfed51cc9bc1378059f11a482873b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shtern, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  1 05:06:58 np0005540825 podman[229615]: 2025-12-01 10:06:58.640245991 +0000 UTC m=+0.141133156 container attach 834c323e73e9b46cbb935a29c9e38d517cfed51cc9bc1378059f11a482873b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 05:06:58 np0005540825 elated_shtern[229679]: 167 167
Dec  1 05:06:58 np0005540825 systemd[1]: libpod-834c323e73e9b46cbb935a29c9e38d517cfed51cc9bc1378059f11a482873b24.scope: Deactivated successfully.
Dec  1 05:06:58 np0005540825 conmon[229679]: conmon 834c323e73e9b46cbb93 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-834c323e73e9b46cbb935a29c9e38d517cfed51cc9bc1378059f11a482873b24.scope/container/memory.events
Dec  1 05:06:58 np0005540825 podman[229615]: 2025-12-01 10:06:58.646546577 +0000 UTC m=+0.147433742 container died 834c323e73e9b46cbb935a29c9e38d517cfed51cc9bc1378059f11a482873b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:06:58 np0005540825 systemd[1]: var-lib-containers-storage-overlay-cf01f6061255acb2aec98849ec2040bcb742bb19c7118561018990ca53d8628c-merged.mount: Deactivated successfully.
Dec  1 05:06:58 np0005540825 podman[229615]: 2025-12-01 10:06:58.692298462 +0000 UTC m=+0.193185637 container remove 834c323e73e9b46cbb935a29c9e38d517cfed51cc9bc1378059f11a482873b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_shtern, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 05:06:58 np0005540825 systemd[1]: libpod-conmon-834c323e73e9b46cbb935a29c9e38d517cfed51cc9bc1378059f11a482873b24.scope: Deactivated successfully.
Dec  1 05:06:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:06:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:06:58.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:06:58 np0005540825 podman[229756]: 2025-12-01 10:06:58.926058216 +0000 UTC m=+0.072982602 container create a820db1aa877c43abd5342b1d9d56839b4f9e0cae1fab3c0f4ebf31256896cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:06:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:06:58.929Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:06:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 305 B/s rd, 0 op/s
Dec  1 05:06:58 np0005540825 podman[229756]: 2025-12-01 10:06:58.901705895 +0000 UTC m=+0.048630261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:06:58 np0005540825 systemd[1]: Started libpod-conmon-a820db1aa877c43abd5342b1d9d56839b4f9e0cae1fab3c0f4ebf31256896cb4.scope.
Dec  1 05:06:59 np0005540825 python3.9[229750]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583617.9269984-650-255630983312740/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:06:59 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:06:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f400576db798c7fbcfe3a7b60a18211edd90414fbd85f49bb091fb657b0a4cb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f400576db798c7fbcfe3a7b60a18211edd90414fbd85f49bb091fb657b0a4cb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f400576db798c7fbcfe3a7b60a18211edd90414fbd85f49bb091fb657b0a4cb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f400576db798c7fbcfe3a7b60a18211edd90414fbd85f49bb091fb657b0a4cb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:06:59 np0005540825 podman[229756]: 2025-12-01 10:06:59.05992452 +0000 UTC m=+0.206848966 container init a820db1aa877c43abd5342b1d9d56839b4f9e0cae1fab3c0f4ebf31256896cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shtern, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:06:59 np0005540825 podman[229756]: 2025-12-01 10:06:59.071200747 +0000 UTC m=+0.218125123 container start a820db1aa877c43abd5342b1d9d56839b4f9e0cae1fab3c0f4ebf31256896cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shtern, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 05:06:59 np0005540825 podman[229756]: 2025-12-01 10:06:59.075150321 +0000 UTC m=+0.222074707 container attach a820db1aa877c43abd5342b1d9d56839b4f9e0cae1fab3c0f4ebf31256896cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shtern, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  1 05:06:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:06:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:06:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:06:59.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:06:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:59 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:59 np0005540825 python3.9[229967]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:06:59 np0005540825 lvm[230002]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:06:59 np0005540825 lvm[230002]: VG ceph_vg0 finished
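
These two lvm[230002] lines are event-driven autoactivation: udev saw the only PV of ceph_vg0 (/dev/loop3) appear, declared the VG complete, and activated its LVs. This is roughly the pvscan step the LVM udev rule performs, sketched as a subprocess call (device name taken from the log):

    import subprocess

    # Assumed equivalent of the udev-triggered autoactivation step.
    subprocess.run(
        ["pvscan", "--cache", "--activate", "ay", "/dev/loop3"],
        check=True,
    )
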
Dec  1 05:06:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:06:59 np0005540825 thirsty_shtern[229773]: {}
Dec  1 05:06:59 np0005540825 systemd[1]: libpod-a820db1aa877c43abd5342b1d9d56839b4f9e0cae1fab3c0f4ebf31256896cb4.scope: Deactivated successfully.
Dec  1 05:06:59 np0005540825 systemd[1]: libpod-a820db1aa877c43abd5342b1d9d56839b4f9e0cae1fab3c0f4ebf31256896cb4.scope: Consumed 1.352s CPU time.
Dec  1 05:06:59 np0005540825 podman[229756]: 2025-12-01 10:06:59.887245491 +0000 UTC m=+1.034169847 container died a820db1aa877c43abd5342b1d9d56839b4f9e0cae1fab3c0f4ebf31256896cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shtern, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 05:06:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:06:59 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:06:59 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f400576db798c7fbcfe3a7b60a18211edd90414fbd85f49bb091fb657b0a4cb2-merged.mount: Deactivated successfully.
Dec  1 05:06:59 np0005540825 podman[229756]: 2025-12-01 10:06:59.941342685 +0000 UTC m=+1.088267051 container remove a820db1aa877c43abd5342b1d9d56839b4f9e0cae1fab3c0f4ebf31256896cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  1 05:06:59 np0005540825 systemd[1]: libpod-conmon-a820db1aa877c43abd5342b1d9d56839b4f9e0cae1fab3c0f4ebf31256896cb4.scope: Deactivated successfully.
Dec  1 05:06:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:07:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:00 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7d00027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:07:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:07:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
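
With the probes done, the cephadm mgr caches the host's device inventory in the monitors' config-key store; the handle_command/audit pairs above record config-key set for mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0. The cached value can be inspected afterwards; a sketch, assuming admin credentials on the node:

    import subprocess

    # Key name copied from the log; the stored value is cephadm's cached
    # inventory blob for this host.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out[:400])
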
Dec  1 05:07:00 np0005540825 python3.9[230169]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:00.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 305 B/s rd, 0 op/s
Dec  1 05:07:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:01.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:07:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:07:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:01 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:01 np0005540825 python3.9[230372]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:01] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:07:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:01] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:07:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:01 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:02 np0005540825 python3.9[230525]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:02 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:02.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:02 np0005540825 python3.9[230677]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 305 B/s rd, 0 op/s
Dec  1 05:07:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:07:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:03.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:07:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:03 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7d00030f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:03 np0005540825 python3.9[230831]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:03 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:04 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:04 np0005540825 python3.9[230983]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:07:04.558 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:07:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:07:04.559 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:07:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:07:04.559 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
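
The ovn_metadata_agent debug triplet (Acquiring / acquired / released) is oslo.concurrency's lockutils instrumentation around ProcessMonitor._check_child_processes; "held 0.000s" means the periodic child-process check found nothing to do. The same pattern with the public lockutils API, as a sketch:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # The decorator emits the acquire/release DEBUG lines seen in the
        # log whenever debug logging is enabled; the body is elided here.
        pass
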
Dec  1 05:07:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:04.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 305 B/s rd, 0 op/s
Dec  1 05:07:05 np0005540825 python3.9[231135]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:05.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:05 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:05 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7d00030f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:05 np0005540825 python3.9[231289]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:07:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:06 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7d00030f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:06 np0005540825 python3.9[231443]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
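
Taken together, the python3.9 tasks between roughly 10:06:56 and 10:07:06 are one Ansible pass over /etc/multipath.conf: stat the file, copy a new version in, ensure a blacklist { } block exists and strip any devnode ".*" catch-all from it, pin find_multipaths/recheck_wwid/skip_kpartx/user_friendly_names under the defaults section, then touch /etc/multipath/.multipath_restart_required as a restart marker. If every edit applied cleanly, the managed portion of the file should read roughly as follows (ordering and the elided template content depend on the copied source file):

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
            ...
    }

    blacklist {
    }
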
Dec  1 05:07:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:06.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 05:07:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:07.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:07:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:07.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:07 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7d00030f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:07 np0005540825 python3.9[231597]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:07:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:07 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:08 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:08 np0005540825 python3.9[231749]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:07:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:08.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:08 np0005540825 python3.9[231827]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:07:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:08.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:07:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:07:09 np0005540825 podman[231926]: 2025-12-01 10:07:09.207996482 +0000 UTC m=+0.062431405 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
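[editor's note] The health_status=healthy event above comes from the healthcheck stanza in config_data ('test': '/openstack/healthcheck'). The same check can be triggered on demand with podman's standard healthcheck subcommand; a sketch, with the container name taken from the log:

    # Hedged sketch: re-run the ovn_metadata_agent health check by hand.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True)
    # podman exits 0 when the check passes, non-zero otherwise.
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")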
Dec  1 05:07:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:07:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:09.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:07:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:09 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:07:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:07:09 np0005540825 python3.9[232000]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:07:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:07:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:07:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:07:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:07:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:07:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:07:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:09 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7d00030f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:10 np0005540825 python3.9[232078]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:07:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:10 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:07:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:10.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:07:10 np0005540825 python3.9[232230]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:07:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:11.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:11 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7cc003a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:11] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:07:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:11] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:07:11 np0005540825 python3.9[232384]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:07:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:11 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:11 np0005540825 python3.9[232462]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:12 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7d0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:12 np0005540825 python3.9[232614]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:07:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:12.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:07:13 np0005540825 python3.9[232692]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:13.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:13 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:13 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:14 np0005540825 python3.9[232846]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:07:14 np0005540825 systemd[1]: Reloading.
Dec  1 05:07:14 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:07:14 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:07:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:14 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:14.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.849860) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583634849900, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1053, "num_deletes": 256, "total_data_size": 1855924, "memory_usage": 1884432, "flush_reason": "Manual Compaction"}
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583634865725, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1786618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17743, "largest_seqno": 18795, "table_properties": {"data_size": 1781656, "index_size": 2486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10397, "raw_average_key_size": 18, "raw_value_size": 1771576, "raw_average_value_size": 3157, "num_data_blocks": 111, "num_entries": 561, "num_filter_entries": 561, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764583543, "oldest_key_time": 1764583543, "file_creation_time": 1764583634, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 15926 microseconds, and 8337 cpu microseconds.
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.865784) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1786618 bytes OK
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.865808) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.867512) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.867534) EVENT_LOG_v1 {"time_micros": 1764583634867528, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.867556) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1851105, prev total WAL file size 1851105, number of live WAL files 2.
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.868611) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1744KB)], [38(11MB)]
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583634868699, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 13878051, "oldest_snapshot_seqno": -1}
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4904 keys, 13411077 bytes, temperature: kUnknown
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583634982269, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13411077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13377088, "index_size": 20631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 125044, "raw_average_key_size": 25, "raw_value_size": 13286707, "raw_average_value_size": 2709, "num_data_blocks": 846, "num_entries": 4904, "num_filter_entries": 4904, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764583634, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.982597) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13411077 bytes
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.984189) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.1 rd, 118.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 11.5 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(15.3) write-amplify(7.5) OK, records in: 5430, records dropped: 526 output_compression: NoCompression
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.984220) EVENT_LOG_v1 {"time_micros": 1764583634984205, "job": 18, "event": "compaction_finished", "compaction_time_micros": 113679, "compaction_time_cpu_micros": 50052, "output_level": 6, "num_output_files": 1, "total_output_size": 13411077, "num_input_records": 5430, "num_output_records": 4904, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583634984983, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec  1 05:07:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583634989955, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.868488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.990000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.990004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.990005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.990007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:07:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:07:14.990008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
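[editor's note] The mon's rocksdb EVENT_LOG_v1 lines are valid JSON after a fixed prefix, so flush and compaction timings (e.g. JOB 17 and 18 above) can be extracted mechanically. A sketch, assuming the journal has been exported to a plain file such as /var/log/messages:

    # Hedged sketch: pull rocksdb EVENT_LOG_v1 records out of a syslog file.
    # Field names match the flush/compaction events logged above; the input
    # path is an assumption.
    import json, re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(lines):
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    for ev in rocksdb_events(open("/var/log/messages")):
        if ev.get("event") == "compaction_finished":
            print("job", ev["job"], ev["compaction_time_micros"], "us,",
                  ev["total_output_size"], "bytes out")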
Dec  1 05:07:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:15.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:15 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:15 np0005540825 python3.9[233037]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:07:15 np0005540825 python3.9[233116]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:15 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a0000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:16 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7d0004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:16 np0005540825 python3.9[233268]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:07:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:16.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 05:07:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:17.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:07:17 np0005540825 python3.9[233346]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:17 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:17 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:18 np0005540825 python3.9[233500]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:07:18 np0005540825 systemd[1]: Reloading.
Dec  1 05:07:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[220334]: 01/12/2025 10:07:18 : epoch 692d687e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a00016a0 fd 48 proxy ignored for local
Dec  1 05:07:18 np0005540825 kernel: ganesha.nfsd[227850]: segfault at 50 ip 00007fc881c3932e sp 00007fc843ffe210 error 4 in libntirpc.so.5.8[7fc881c1e000+2c000] likely on CPU 2 (core 0, socket 2)
Dec  1 05:07:18 np0005540825 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  1 05:07:18 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:07:18 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:07:18 np0005540825 systemd[1]: Started Process Core Dump (PID 233502/UID 0).
Dec  1 05:07:18 np0005540825 systemd[1]: Starting Create netns directory...
Dec  1 05:07:18 np0005540825 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 05:07:18 np0005540825 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 05:07:18 np0005540825 systemd[1]: Finished Create netns directory.
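[editor's note] The ansible.builtin.systemd invocation above (daemon_reload=True, enabled=True, state=started) reduces to three systemctl operations, which explains the "Reloading." and generator lines that follow it. A sketch of the equivalent sequence; the unit name comes from the log:

    # Hedged sketch: the systemctl calls behind the ansible.builtin.systemd task.
    import subprocess

    unit = "netns-placeholder.service"
    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", unit],
                ["systemctl", "start", unit]):
        subprocess.run(cmd, check=True)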
Dec  1 05:07:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  1 05:07:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:18.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  1 05:07:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:18.931Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:07:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:18.932Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:07:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:07:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:19.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:19 np0005540825 python3.9[233696]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:07:19 np0005540825 systemd-coredump[233537]: Process 220359 (ganesha.nfsd) of user 0 dumped core.
Dec  1 05:07:19 np0005540825 systemd-coredump[233537]: Stack trace of thread 57:
Dec  1 05:07:19 np0005540825 systemd-coredump[233537]: #0  0x00007fc881c3932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Dec  1 05:07:19 np0005540825 systemd-coredump[233537]: ELF object binary architecture: AMD x86-64
Dec  1 05:07:19 np0005540825 systemd[1]: systemd-coredump@4-233502-0.service: Deactivated successfully.
Dec  1 05:07:19 np0005540825 systemd[1]: systemd-coredump@4-233502-0.service: Consumed 1.214s CPU time.
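[editor's note] systemd-coredump has captured the ganesha.nfsd core from the segfault in libntirpc.so.5.8 logged at 05:07:18. A sketch of retrieving it for offline analysis with the standard coredumpctl tool; the PID comes from the log, the output filename is arbitrary:

    # Hedged sketch: inspect and export the core recorded above.
    import subprocess

    pid = "220359"
    print(subprocess.run(["coredumpctl", "info", pid],
                         capture_output=True, text=True).stdout)
    # Export the core for gdb, e.g.: gdb /usr/bin/ganesha.nfsd core.ganesha
    subprocess.run(["coredumpctl", "dump", pid, "--output", "core.ganesha"],
                   check=True)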
Dec  1 05:07:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:19 np0005540825 podman[233725]: 2025-12-01 10:07:19.86648508 +0000 UTC m=+0.028984264 container died 10befa2b4a4711a0f07f1b41908bc8a32640288babf26b7dc6df679048c217dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 05:07:19 np0005540825 systemd[1]: var-lib-containers-storage-overlay-43d489ed16a71826fc6f7b1f8da8cee17c6a1ee81a630cf453eabda0edf732ce-merged.mount: Deactivated successfully.
Dec  1 05:07:19 np0005540825 podman[233725]: 2025-12-01 10:07:19.904715767 +0000 UTC m=+0.067214901 container remove 10befa2b4a4711a0f07f1b41908bc8a32640288babf26b7dc6df679048c217dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:07:19 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Main process exited, code=exited, status=139/n/a
Dec  1 05:07:20 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Failed with result 'exit-code'.
Dec  1 05:07:20 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.593s CPU time.
Dec  1 05:07:20 np0005540825 python3.9[233893]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:07:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:20.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:07:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:21.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:21] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:07:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:21] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:07:21 np0005540825 python3.9[234042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583640.17512-1271-9552017978745/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:07:22 np0005540825 python3.9[234195]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:07:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:07:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:22.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:07:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:07:23 np0005540825 podman[234319]: 2025-12-01 10:07:23.104247368 +0000 UTC m=+0.114664728 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 05:07:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:23.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:23 np0005540825 python3.9[234364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:07:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100724 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:07:24 np0005540825 python3.9[234498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583642.7049801-1346-243890732976822/.source.json _original_basename=.zpc2465l follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:07:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
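[editor's note] The audit line shows the mgr periodically dispatching "osd blocklist ls" with JSON output. The same mon command can be issued by hand to see what, if anything, is blocklisted; a sketch using the ceph CLI:

    # Hedged sketch: run the mon command the mgr dispatches above.
    import json, subprocess

    out = subprocess.run(["ceph", "osd", "blocklist", "ls", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    print(json.loads(out) if out.strip() else [])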
Dec  1 05:07:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:07:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:24.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:07:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:07:25 np0005540825 python3.9[234650]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:25.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:07:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:26.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:07:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:07:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:27.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:07:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:07:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:27.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:07:27 np0005540825 python3.9[235080]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  1 05:07:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:28.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:28 np0005540825 python3.9[235233]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 05:07:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:28.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:07:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:07:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:29.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:29 np0005540825 python3.9[235387]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 05:07:30 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Scheduled restart job, restart counter is at 5.
Dec  1 05:07:30 np0005540825 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:07:30 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.593s CPU time.
Dec  1 05:07:30 np0005540825 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 05:07:30 np0005540825 podman[235485]: 2025-12-01 10:07:30.422149922 +0000 UTC m=+0.064011636 container create ee2ae285e2bb503379717da814193004352bc37ffe396cf6c800880d666d92ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:07:30 np0005540825 podman[235485]: 2025-12-01 10:07:30.393378694 +0000 UTC m=+0.035240418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:07:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e4207408b4af6ddffe216286562d7248329f7dc28bdefd2ab2e953a2275aa8/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  1 05:07:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e4207408b4af6ddffe216286562d7248329f7dc28bdefd2ab2e953a2275aa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:07:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e4207408b4af6ddffe216286562d7248329f7dc28bdefd2ab2e953a2275aa8/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:07:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e4207408b4af6ddffe216286562d7248329f7dc28bdefd2ab2e953a2275aa8/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.pytvsu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:07:30 np0005540825 podman[235485]: 2025-12-01 10:07:30.507880299 +0000 UTC m=+0.149742053 container init ee2ae285e2bb503379717da814193004352bc37ffe396cf6c800880d666d92ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:07:30 np0005540825 podman[235485]: 2025-12-01 10:07:30.524726042 +0000 UTC m=+0.166587746 container start ee2ae285e2bb503379717da814193004352bc37ffe396cf6c800880d666d92ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:07:30 np0005540825 bash[235485]: ee2ae285e2bb503379717da814193004352bc37ffe396cf6c800880d666d92ff
Dec  1 05:07:30 np0005540825 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:07:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:30 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  1 05:07:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:30 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  1 05:07:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:30 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  1 05:07:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:30 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  1 05:07:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:30 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  1 05:07:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:30 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  1 05:07:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:30 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  1 05:07:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:30 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:07:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:30.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:07:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:31.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:31] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:07:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:31] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:07:31 np0005540825 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  1 05:07:32 np0005540825 python3[235674]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 05:07:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:07:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:32.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:07:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:07:33 np0005540825 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 05:07:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:07:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:33.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:07:33 np0005540825 podman[235688]: 2025-12-01 10:07:33.966580975 +0000 UTC m=+1.594177990 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  1 05:07:34 np0005540825 podman[235748]: 2025-12-01 10:07:34.172393843 +0000 UTC m=+0.056379005 container create 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  1 05:07:34 np0005540825 podman[235748]: 2025-12-01 10:07:34.146352367 +0000 UTC m=+0.030337549 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  1 05:07:34 np0005540825 python3[235674]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  1 05:07:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec  1 05:07:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:07:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:34.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:07:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:07:35 np0005540825 python3.9[235938]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:07:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:35.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100735 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:07:36 np0005540825 python3.9[236096]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:36 np0005540825 python3.9[236172]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:07:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:36 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:07:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:36 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:07:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:36 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:07:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:36.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec  1 05:07:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:37.107Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:07:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:37.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:07:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:37.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:37 np0005540825 python3.9[236324]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764583656.75584-1610-252419781614850/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:38 np0005540825 python3.9[236401]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 05:07:38 np0005540825 systemd[1]: Reloading.
Dec  1 05:07:38 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:07:38 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:07:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:07:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:38.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:07:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:38.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:07:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec  1 05:07:39 np0005540825 python3.9[236511]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:07:39 np0005540825 systemd[1]: Reloading.
Dec  1 05:07:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:39.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:39 np0005540825 podman[236515]: 2025-12-01 10:07:39.345690222 +0000 UTC m=+0.090442962 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:07:39 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:07:39 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:07:39
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.nfs', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.meta', 'images', 'vms', 'volumes', 'cephfs.cephfs.data', 'backups']
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:07:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:07:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:07:39 np0005540825 systemd[1]: Starting multipathd container...
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:07:39 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:07:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a778705dba1240d2a110fc00f7118354d9aac24e8d97a8597c372b862d4c6b92/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 05:07:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a778705dba1240d2a110fc00f7118354d9aac24e8d97a8597c372b862d4c6b92/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:07:39 np0005540825 systemd[1]: Started /usr/bin/podman healthcheck run 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239.
Dec  1 05:07:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:07:39 np0005540825 podman[236572]: 2025-12-01 10:07:39.798110341 +0000 UTC m=+0.140902253 container init 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec  1 05:07:39 np0005540825 multipathd[236587]: + sudo -E kolla_set_configs
Dec  1 05:07:39 np0005540825 podman[236572]: 2025-12-01 10:07:39.834911224 +0000 UTC m=+0.177703136 container start 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 05:07:39 np0005540825 podman[236572]: multipathd
Dec  1 05:07:39 np0005540825 systemd[1]: Started multipathd container.
Dec  1 05:07:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:39 np0005540825 multipathd[236587]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 05:07:39 np0005540825 multipathd[236587]: INFO:__main__:Validating config file
Dec  1 05:07:39 np0005540825 multipathd[236587]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 05:07:39 np0005540825 multipathd[236587]: INFO:__main__:Writing out command to execute
Dec  1 05:07:39 np0005540825 multipathd[236587]: ++ cat /run_command
Dec  1 05:07:39 np0005540825 multipathd[236587]: + CMD='/usr/sbin/multipathd -d'
Dec  1 05:07:39 np0005540825 multipathd[236587]: + ARGS=
Dec  1 05:07:39 np0005540825 multipathd[236587]: + sudo kolla_copy_cacerts
Dec  1 05:07:39 np0005540825 multipathd[236587]: + [[ ! -n '' ]]
Dec  1 05:07:39 np0005540825 multipathd[236587]: + . kolla_extend_start
Dec  1 05:07:39 np0005540825 multipathd[236587]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  1 05:07:39 np0005540825 multipathd[236587]: Running command: '/usr/sbin/multipathd -d'
Dec  1 05:07:39 np0005540825 multipathd[236587]: + umask 0022
Dec  1 05:07:39 np0005540825 multipathd[236587]: + exec /usr/sbin/multipathd -d
Dec  1 05:07:39 np0005540825 podman[236594]: 2025-12-01 10:07:39.939096426 +0000 UTC m=+0.084859121 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:07:39 np0005540825 systemd[1]: 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239-33d66bbf9839664b.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 05:07:39 np0005540825 systemd[1]: 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239-33d66bbf9839664b.service: Failed with result 'exit-code'.
Dec  1 05:07:39 np0005540825 multipathd[236587]: 3528.644183 | --------start up--------
Dec  1 05:07:39 np0005540825 multipathd[236587]: 3528.644207 | read /etc/multipath.conf
Dec  1 05:07:39 np0005540825 multipathd[236587]: 3528.650945 | path checkers start up
Dec  1 05:07:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:40.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:40 np0005540825 python3.9[236774]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:07:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:07:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:40 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:07:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:40 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:07:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:40 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:07:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:41 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:07:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:41.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:41] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:07:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:41] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:07:41 np0005540825 python3.9[236954]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:07:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:41 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:07:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:41 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:07:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:41 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:07:42 np0005540825 python3.9[237120]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 05:07:42 np0005540825 systemd[1]: Stopping multipathd container...
Dec  1 05:07:42 np0005540825 multipathd[236587]: 3531.295021 | exit (signal)
Dec  1 05:07:42 np0005540825 multipathd[236587]: 3531.295850 | --------shut down-------
Dec  1 05:07:42 np0005540825 systemd[1]: libpod-7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239.scope: Deactivated successfully.
Dec  1 05:07:42 np0005540825 podman[237124]: 2025-12-01 10:07:42.636945588 +0000 UTC m=+0.075351095 container died 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 05:07:42 np0005540825 systemd[1]: 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239-33d66bbf9839664b.timer: Deactivated successfully.
Dec  1 05:07:42 np0005540825 systemd[1]: Stopped /usr/bin/podman healthcheck run 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239.
Dec  1 05:07:42 np0005540825 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239-userdata-shm.mount: Deactivated successfully.
Dec  1 05:07:42 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a778705dba1240d2a110fc00f7118354d9aac24e8d97a8597c372b862d4c6b92-merged.mount: Deactivated successfully.
Dec  1 05:07:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:42.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:42 np0005540825 podman[237124]: 2025-12-01 10:07:42.838406134 +0000 UTC m=+0.276811611 container cleanup 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 05:07:42 np0005540825 podman[237124]: multipathd
Dec  1 05:07:42 np0005540825 podman[237149]: multipathd
Dec  1 05:07:42 np0005540825 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  1 05:07:42 np0005540825 systemd[1]: Stopped multipathd container.
Dec  1 05:07:42 np0005540825 systemd[1]: Starting multipathd container...
Dec  1 05:07:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Dec  1 05:07:43 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:07:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a778705dba1240d2a110fc00f7118354d9aac24e8d97a8597c372b862d4c6b92/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 05:07:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a778705dba1240d2a110fc00f7118354d9aac24e8d97a8597c372b862d4c6b92/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 05:07:43 np0005540825 systemd[1]: Started /usr/bin/podman healthcheck run 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239.
Dec  1 05:07:43 np0005540825 podman[237162]: 2025-12-01 10:07:43.13098896 +0000 UTC m=+0.144478670 container init 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec  1 05:07:43 np0005540825 multipathd[237177]: + sudo -E kolla_set_configs
Dec  1 05:07:43 np0005540825 podman[237162]: 2025-12-01 10:07:43.162797858 +0000 UTC m=+0.176287528 container start 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 05:07:43 np0005540825 podman[237162]: multipathd
Dec  1 05:07:43 np0005540825 systemd[1]: Started multipathd container.
Dec  1 05:07:43 np0005540825 multipathd[237177]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 05:07:43 np0005540825 multipathd[237177]: INFO:__main__:Validating config file
Dec  1 05:07:43 np0005540825 multipathd[237177]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 05:07:43 np0005540825 multipathd[237177]: INFO:__main__:Writing out command to execute
Dec  1 05:07:43 np0005540825 multipathd[237177]: ++ cat /run_command
Dec  1 05:07:43 np0005540825 multipathd[237177]: + CMD='/usr/sbin/multipathd -d'
Dec  1 05:07:43 np0005540825 multipathd[237177]: + ARGS=
Dec  1 05:07:43 np0005540825 multipathd[237177]: + sudo kolla_copy_cacerts
Dec  1 05:07:43 np0005540825 multipathd[237177]: + [[ ! -n '' ]]
Dec  1 05:07:43 np0005540825 multipathd[237177]: + . kolla_extend_start
Dec  1 05:07:43 np0005540825 multipathd[237177]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  1 05:07:43 np0005540825 multipathd[237177]: Running command: '/usr/sbin/multipathd -d'
Dec  1 05:07:43 np0005540825 multipathd[237177]: + umask 0022
Dec  1 05:07:43 np0005540825 multipathd[237177]: + exec /usr/sbin/multipathd -d
Dec  1 05:07:43 np0005540825 podman[237185]: 2025-12-01 10:07:43.285824898 +0000 UTC m=+0.105972471 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:07:43 np0005540825 systemd[1]: 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239-5e4c7f5f66119a33.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 05:07:43 np0005540825 systemd[1]: 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239-5e4c7f5f66119a33.service: Failed with result 'exit-code'.
Dec  1 05:07:43 np0005540825 multipathd[237177]: 3531.985658 | --------start up--------
Dec  1 05:07:43 np0005540825 multipathd[237177]: 3531.985687 | read /etc/multipath.conf
Dec  1 05:07:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:07:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:43.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:07:43 np0005540825 multipathd[237177]: 3531.992672 | path checkers start up
Dec  1 05:07:43 np0005540825 python3.9[237371]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:44 np0005540825 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  1 05:07:44 np0005540825 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  1 05:07:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:44.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Dec  1 05:07:45 np0005540825 python3.9[237525]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 05:07:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:07:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:45.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:07:45 np0005540825 python3.9[237679]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  1 05:07:45 np0005540825 kernel: Key type psk registered
Dec  1 05:07:46 np0005540825 python3.9[237841]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:07:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:46.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:47.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:07:47 np0005540825 python3.9[237965]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764583666.1704893-1850-172862309205367/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:47.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:07:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:47 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef9c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
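
[Annotation] The ganesha startup burst above is mostly benign in this containerized deployment: the DBUS :CRIT lines stem from the missing /run/dbus/system_bus_socket inside the container, the krb5/keytab warnings from unconfigured Kerberos, and the daemon still reaches "NFS SERVER INITIALIZED" and enters its 90-second grace period. A small sketch for tallying the severity field (:EVENT/:WARN/:CRIT) when skimming a long excerpt like this one; the regex is derived from the line shapes above:

```python
import re
from collections import Counter

# Ganesha lines carry "<func> :<COMPONENT> :<SEVERITY> :<message>".
SEV_RE = re.compile(r' :(?P<comp>[A-Z0-9_ ]+) :(?P<sev>EVENT|WARN|CRIT|INFO|DEBUG) :')

def severity_counts(lines):
    """Count ganesha log severities across an iterable of journal lines."""
    return Counter(m.group("sev") for line in lines if (m := SEV_RE.search(line)))

sample = ('ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT '
          ':dbus_bus_get failed (...)')
print(severity_counts([sample]))  # Counter({'CRIT': 1})
```
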
Dec  1 05:07:48 np0005540825 python3.9[238133]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:48 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef90001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:48.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:48.939Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:07:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:48.939Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:07:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:48.940Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
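
[Annotation] Alertmanager on compute-0 repeatedly fails to deliver the ceph-dashboard webhook to compute-1/compute-2 on port 8443: first `dial tcp ... i/o timeout`, then `context deadline exceeded` after retries. A hypothetical stand-in receiver (not the Ceph dashboard) that accepts POSTs on the same port can help distinguish a network or firewall problem (stub also unreachable) from a dashboard problem (stub reachable):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Test stub only: accept the Alertmanager webhook POSTs that are timing out
# above and acknowledge them with 200, printing what arrived.
class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        print("got", len(body), "bytes on", self.path)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Same port the failing webhooks target; run on compute-1/compute-2.
    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
```
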
Dec  1 05:07:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Dec  1 05:07:49 np0005540825 python3.9[238285]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 05:07:49 np0005540825 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  1 05:07:49 np0005540825 systemd[1]: Stopped Load Kernel Modules.
Dec  1 05:07:49 np0005540825 systemd[1]: Stopping Load Kernel Modules...
Dec  1 05:07:49 np0005540825 systemd[1]: Starting Load Kernel Modules...
Dec  1 05:07:49 np0005540825 systemd[1]: Finished Load Kernel Modules.
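
[Annotation] The preceding Ansible tasks make the nvme-fabrics module persistent: create /etc/modules-load.d, modprobe the module (the kernel's `Key type psk registered` line appears to be a side effect of that load), template /etc/modules-load.d/nvme-fabrics.conf, add the name to /etc/modules, and restart systemd-modules-load.service. A minimal sketch for verifying the end state; the one-module-per-line file content is an assumption, since only the paths appear in the log:

```python
from pathlib import Path

# systemd-modules-load reads one module name per line from *.conf files in
# /etc/modules-load.d; the copied template presumably contains just:
#   nvme-fabrics
CONF = Path("/etc/modules-load.d/nvme-fabrics.conf")

def module_loaded(name: str) -> bool:
    """Check /proc/modules for a loaded module (dashes are stored as underscores)."""
    wanted = name.replace("-", "_")
    with open("/proc/modules") as f:
        return any(line.split()[0] == wanted for line in f)

print(CONF, "exists:", CONF.exists())
print("nvme-fabrics loaded:", module_loaded("nvme-fabrics"))
```
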
Dec  1 05:07:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:07:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:49.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:07:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:49 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef78000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:49 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef74000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:50 np0005540825 python3.9[238443]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 05:07:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100750 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:07:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:50 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef80000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:50 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:07:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:50 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:07:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:50.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec  1 05:07:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:51.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:51 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef90001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:51] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:07:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:07:51] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:07:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:51 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef780016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:52 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:52 np0005540825 systemd[1]: Reloading.
Dec  1 05:07:52 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:07:52 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:07:52 np0005540825 systemd[1]: Reloading.
Dec  1 05:07:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:07:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:52.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:07:52 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:07:52 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:07:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec  1 05:07:53 np0005540825 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  1 05:07:53 np0005540825 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  1 05:07:53 np0005540825 lvm[238568]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:07:53 np0005540825 lvm[238568]: VG ceph_vg0 finished
Dec  1 05:07:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:53 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef80001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.005000135s ======
Dec  1 05:07:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:53.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000135s
Dec  1 05:07:53 np0005540825 podman[238557]: 2025-12-01 10:07:53.380292711 +0000 UTC m=+0.114646525 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  1 05:07:53 np0005540825 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 05:07:53 np0005540825 systemd[1]: Starting man-db-cache-update.service...
Dec  1 05:07:53 np0005540825 systemd[1]: Reloading.
Dec  1 05:07:53 np0005540825 ceph-mgr[74709]: [dashboard INFO request] [192.168.122.100:60632] [POST] [200] [0.002s] [4.0B] [1f7cde19-2463-47f7-8671-ea9c413f6c07] /api/prometheus_receiver
Dec  1 05:07:53 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:07:53 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:07:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:53 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:07:53 np0005540825 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 05:07:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:53 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef90001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:54 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef780016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:07:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:07:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:54.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:54 np0005540825 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 05:07:55 np0005540825 systemd[1]: Finished man-db-cache-update.service.
Dec  1 05:07:55 np0005540825 systemd[1]: man-db-cache-update.service: Consumed 1.775s CPU time.
Dec  1 05:07:55 np0005540825 systemd[1]: run-r48cda21904f948f4acecb4d483d961ea.service: Deactivated successfully.
Dec  1 05:07:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec  1 05:07:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:55 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef80001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:55.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:55 np0005540825 python3.9[239925]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 05:07:55 np0005540825 iscsid[227236]: iscsid shutting down.
Dec  1 05:07:55 np0005540825 systemd[1]: Stopping Open-iSCSI...
Dec  1 05:07:55 np0005540825 systemd[1]: iscsid.service: Deactivated successfully.
Dec  1 05:07:55 np0005540825 systemd[1]: Stopped Open-iSCSI.
Dec  1 05:07:55 np0005540825 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  1 05:07:55 np0005540825 systemd[1]: Starting Open-iSCSI...
Dec  1 05:07:55 np0005540825 systemd[1]: Started Open-iSCSI.
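
[Annotation] Note the negated condition in the iscsid restart above: iscsi.service's one-time configuration is skipped because ConditionPathExists=!/etc/iscsi/initiatorname.iscsi fails precisely when the initiator name file already exists. A small sketch that reads the name which makes that condition fail:

```python
from pathlib import Path

def initiator_name(path="/etc/iscsi/initiatorname.iscsi"):
    """Return the InitiatorName= value from the file whose presence
    skips the one-time iscsi.service configuration above."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line.startswith("InitiatorName="):
            return line.split("=", 1)[1]
    return None

print(initiator_name())
```
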
Dec  1 05:07:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:55 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef74001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:56 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef90001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:56 np0005540825 python3.9[240080]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 05:07:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:07:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:56.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:07:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec  1 05:07:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:57.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:07:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:57 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef780016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100757 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:07:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:57.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:57 np0005540825 python3.9[240238]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:07:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:57 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef80001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:58 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef74001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:07:58.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:58.941Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:07:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:07:58.942Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:07:58 np0005540825 python3.9[240390]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 05:07:59 np0005540825 systemd[1]: Reloading.
Dec  1 05:07:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 596 B/s wr, 2 op/s
Dec  1 05:07:59 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:07:59 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:07:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:59 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef90001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:07:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:07:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:07:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:07:59.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:07:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:07:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:07:59 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef78002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:00 np0005540825 python3.9[240577]: ansible-ansible.builtin.service_facts Invoked
Dec  1 05:08:00 np0005540825 network[240594]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 05:08:00 np0005540825 network[240595]: 'network-scripts' will be removed from distribution in near future.
Dec  1 05:08:00 np0005540825 network[240596]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 05:08:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:08:00 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef80001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:00.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Dec  1 05:08:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:08:01 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef74001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:01.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:01] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:08:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:01] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:08:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:08:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 05:08:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:08:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 05:08:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:08:01 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef90001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[235502]: 01/12/2025 10:08:02 : epoch 692d68e2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fef90001c00 fd 39 proxy ignored for local
Dec  1 05:08:02 np0005540825 kernel: ganesha.nfsd[238040]: segfault at 50 ip 00007ff04923f32e sp 00007ff00f7fd210 error 4 in libntirpc.so.5.8[7ff049224000+2c000] likely on CPU 5 (core 0, socket 5)
Dec  1 05:08:02 np0005540825 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
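
[Annotation] The segfault line carries enough to locate the fault inside libntirpc.so.5.8 by simple arithmetic: instruction pointer minus the start of the executable mapping. The systemd-coredump entry further below reports the same fault as `libntirpc.so.5.8 + 0x2232e`, an offset from the object's load address rather than from this mapping; the 0x7000 difference is presumably where the executable segment begins within the object.

```python
# Values copied from the kernel segfault line above:
#   ip 0x7ff04923f32e ... in libntirpc.so.5.8[7ff049224000+2c000]
ip       = 0x7ff04923f32e   # faulting instruction pointer
vma_base = 0x7ff049224000   # start of the executable mapping
vma_size = 0x2c000          # mapping length

off_in_vma = ip - vma_base
assert 0 <= off_in_vma < vma_size
print(hex(off_in_vma))            # 0x1b32e, offset within the executable mapping

# systemd-coredump reports libntirpc.so.5.8 + 0x2232e (relative to the
# object's load address); the delta is the mapping's start within the object.
print(hex(0x2232e - off_in_vma))  # 0x7000
```
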
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:08:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 637 B/s rd, 182 B/s wr, 0 op/s
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:08:02 np0005540825 systemd[1]: Started Process Core Dump (PID 240833/UID 0).
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:08:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
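
[Annotation] This mon_command burst is the cephadm mgr module's periodic refresh: it persists per-host device inventory under `mgr/cephadm/host.*` config-keys, regenerates the minimal client conf, and fetches the admin and bootstrap-osd keys before probing for OSD candidates (the podman containers that follow). A sketch of issuing one of these mon commands directly through the librados Python binding, assuming the python3-rados package, a reachable cluster via /etc/ceph/ceph.conf, and an admin keyring on the host:

```python
import json
import rados  # python3-rados binding

# Issue the same "config generate-minimal-conf" command the mgr dispatches above.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "config generate-minimal-conf"}), b"")
print(ret, outbuf.decode())
cluster.shutdown()
```
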
Dec  1 05:08:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:02.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:02 np0005540825 podman[240950]: 2025-12-01 10:08:02.962401907 +0000 UTC m=+0.061341256 container create a8e6d7c5ca7cf814001e6e46da458266db77017a9bb5707c57cd1e5aa5a69819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moser, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  1 05:08:03 np0005540825 systemd[1]: Started libpod-conmon-a8e6d7c5ca7cf814001e6e46da458266db77017a9bb5707c57cd1e5aa5a69819.scope.
Dec  1 05:08:03 np0005540825 podman[240950]: 2025-12-01 10:08:02.934751511 +0000 UTC m=+0.033690950 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:08:03 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:08:03 np0005540825 podman[240950]: 2025-12-01 10:08:03.081566463 +0000 UTC m=+0.180505912 container init a8e6d7c5ca7cf814001e6e46da458266db77017a9bb5707c57cd1e5aa5a69819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 05:08:03 np0005540825 podman[240950]: 2025-12-01 10:08:03.097556394 +0000 UTC m=+0.196495733 container start a8e6d7c5ca7cf814001e6e46da458266db77017a9bb5707c57cd1e5aa5a69819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Dec  1 05:08:03 np0005540825 podman[240950]: 2025-12-01 10:08:03.101020048 +0000 UTC m=+0.199959487 container attach a8e6d7c5ca7cf814001e6e46da458266db77017a9bb5707c57cd1e5aa5a69819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:08:03 np0005540825 gracious_moser[240969]: 167 167
Dec  1 05:08:03 np0005540825 podman[240950]: 2025-12-01 10:08:03.109375133 +0000 UTC m=+0.208314542 container died a8e6d7c5ca7cf814001e6e46da458266db77017a9bb5707c57cd1e5aa5a69819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moser, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:08:03 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:08:03 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:03 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:03 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:08:03 np0005540825 systemd[1]: libpod-a8e6d7c5ca7cf814001e6e46da458266db77017a9bb5707c57cd1e5aa5a69819.scope: Deactivated successfully.
Dec  1 05:08:03 np0005540825 systemd[1]: var-lib-containers-storage-overlay-880713d9aa1cbe2a7e35766c0e8d5e13ffa5e37ab5e8ce21efe6262aa3a8e6c5-merged.mount: Deactivated successfully.
Dec  1 05:08:03 np0005540825 podman[240950]: 2025-12-01 10:08:03.156945177 +0000 UTC m=+0.255884536 container remove a8e6d7c5ca7cf814001e6e46da458266db77017a9bb5707c57cd1e5aa5a69819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_moser, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 05:08:03 np0005540825 systemd[1]: libpod-conmon-a8e6d7c5ca7cf814001e6e46da458266db77017a9bb5707c57cd1e5aa5a69819.scope: Deactivated successfully.
Dec  1 05:08:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:03.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:03 np0005540825 podman[241001]: 2025-12-01 10:08:03.397505438 +0000 UTC m=+0.095957710 container create 8b6c681dd605e559870c930c717c54bb1f530e1dc773371c096a0cf4238673bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_shtern, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 05:08:03 np0005540825 podman[241001]: 2025-12-01 10:08:03.346448481 +0000 UTC m=+0.044900823 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:08:03 np0005540825 systemd[1]: Started libpod-conmon-8b6c681dd605e559870c930c717c54bb1f530e1dc773371c096a0cf4238673bc.scope.
Dec  1 05:08:03 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:08:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9acb47ddb7ed05848aef2eebc6c63c9cbb261fd2a63a2df9dfa091a0cc31f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9acb47ddb7ed05848aef2eebc6c63c9cbb261fd2a63a2df9dfa091a0cc31f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9acb47ddb7ed05848aef2eebc6c63c9cbb261fd2a63a2df9dfa091a0cc31f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9acb47ddb7ed05848aef2eebc6c63c9cbb261fd2a63a2df9dfa091a0cc31f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9acb47ddb7ed05848aef2eebc6c63c9cbb261fd2a63a2df9dfa091a0cc31f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:03 np0005540825 podman[241001]: 2025-12-01 10:08:03.50983621 +0000 UTC m=+0.208288502 container init 8b6c681dd605e559870c930c717c54bb1f530e1dc773371c096a0cf4238673bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 05:08:03 np0005540825 podman[241001]: 2025-12-01 10:08:03.517890857 +0000 UTC m=+0.216343119 container start 8b6c681dd605e559870c930c717c54bb1f530e1dc773371c096a0cf4238673bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_shtern, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  1 05:08:03 np0005540825 podman[241001]: 2025-12-01 10:08:03.521246478 +0000 UTC m=+0.219698810 container attach 8b6c681dd605e559870c930c717c54bb1f530e1dc773371c096a0cf4238673bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:08:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:03.566Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:03 np0005540825 loving_shtern[241025]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:08:03 np0005540825 loving_shtern[241025]: --> All data devices are unavailable
Dec  1 05:08:03 np0005540825 systemd-coredump[240835]: Process 235506 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 43:
                                                       #0  0x00007ff04923f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Dec  1 05:08:03 np0005540825 systemd[1]: libpod-8b6c681dd605e559870c930c717c54bb1f530e1dc773371c096a0cf4238673bc.scope: Deactivated successfully.
Dec  1 05:08:03 np0005540825 podman[241001]: 2025-12-01 10:08:03.910260055 +0000 UTC m=+0.608712307 container died 8b6c681dd605e559870c930c717c54bb1f530e1dc773371c096a0cf4238673bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_shtern, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 05:08:03 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1c9acb47ddb7ed05848aef2eebc6c63c9cbb261fd2a63a2df9dfa091a0cc31f0-merged.mount: Deactivated successfully.
Dec  1 05:08:03 np0005540825 podman[241001]: 2025-12-01 10:08:03.964326494 +0000 UTC m=+0.662778746 container remove 8b6c681dd605e559870c930c717c54bb1f530e1dc773371c096a0cf4238673bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_shtern, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:08:03 np0005540825 systemd[1]: libpod-conmon-8b6c681dd605e559870c930c717c54bb1f530e1dc773371c096a0cf4238673bc.scope: Deactivated successfully.
Dec  1 05:08:04 np0005540825 systemd[1]: systemd-coredump@5-240833-0.service: Deactivated successfully.
Dec  1 05:08:04 np0005540825 systemd[1]: systemd-coredump@5-240833-0.service: Consumed 1.656s CPU time.
Dec  1 05:08:04 np0005540825 podman[241096]: 2025-12-01 10:08:04.054902419 +0000 UTC m=+0.025409447 container died ee2ae285e2bb503379717da814193004352bc37ffe396cf6c800880d666d92ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 05:08:04 np0005540825 systemd[1]: var-lib-containers-storage-overlay-12e4207408b4af6ddffe216286562d7248329f7dc28bdefd2ab2e953a2275aa8-merged.mount: Deactivated successfully.
Dec  1 05:08:04 np0005540825 podman[241096]: 2025-12-01 10:08:04.099022519 +0000 UTC m=+0.069529517 container remove ee2ae285e2bb503379717da814193004352bc37ffe396cf6c800880d666d92ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 05:08:04 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Main process exited, code=exited, status=139/n/a
Dec  1 05:08:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 637 B/s rd, 182 B/s wr, 0 op/s
Dec  1 05:08:04 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Failed with result 'exit-code'.
Dec  1 05:08:04 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.729s CPU time.
Dec  1 05:08:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:08:04.560 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:08:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:08:04.563 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:08:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:08:04.563 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:08:04 np0005540825 podman[241230]: 2025-12-01 10:08:04.58857005 +0000 UTC m=+0.054137112 container create 7396db109f85b931d33fb0c052fa46b19c6d51035adcee3de5cf47b0b653464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 05:08:04 np0005540825 systemd[1]: Started libpod-conmon-7396db109f85b931d33fb0c052fa46b19c6d51035adcee3de5cf47b0b653464f.scope.
Dec  1 05:08:04 np0005540825 podman[241230]: 2025-12-01 10:08:04.560669557 +0000 UTC m=+0.026236709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:08:04 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:08:04 np0005540825 podman[241230]: 2025-12-01 10:08:04.679286378 +0000 UTC m=+0.144853530 container init 7396db109f85b931d33fb0c052fa46b19c6d51035adcee3de5cf47b0b653464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:08:04 np0005540825 podman[241230]: 2025-12-01 10:08:04.687372536 +0000 UTC m=+0.152939628 container start 7396db109f85b931d33fb0c052fa46b19c6d51035adcee3de5cf47b0b653464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  1 05:08:04 np0005540825 podman[241230]: 2025-12-01 10:08:04.691607211 +0000 UTC m=+0.157174373 container attach 7396db109f85b931d33fb0c052fa46b19c6d51035adcee3de5cf47b0b653464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  1 05:08:04 np0005540825 pensive_dijkstra[241247]: 167 167
Dec  1 05:08:04 np0005540825 systemd[1]: libpod-7396db109f85b931d33fb0c052fa46b19c6d51035adcee3de5cf47b0b653464f.scope: Deactivated successfully.
Dec  1 05:08:04 np0005540825 podman[241230]: 2025-12-01 10:08:04.697814438 +0000 UTC m=+0.163381530 container died 7396db109f85b931d33fb0c052fa46b19c6d51035adcee3de5cf47b0b653464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:08:04 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4a0660f10af5b37b430d83b88ba6ecbbf1da6cb6d812e02e7008ad78d24ab5be-merged.mount: Deactivated successfully.
Dec  1 05:08:04 np0005540825 podman[241230]: 2025-12-01 10:08:04.749919354 +0000 UTC m=+0.215486446 container remove 7396db109f85b931d33fb0c052fa46b19c6d51035adcee3de5cf47b0b653464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 05:08:04 np0005540825 systemd[1]: libpod-conmon-7396db109f85b931d33fb0c052fa46b19c6d51035adcee3de5cf47b0b653464f.scope: Deactivated successfully.
Dec  1 05:08:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:04.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:08:05 np0005540825 podman[241270]: 2025-12-01 10:08:05.0013739 +0000 UTC m=+0.068851719 container create d7129e660cd81838f958b9948e05a9b73c0317af241d0bfd82b2123449e9451a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_goodall, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:08:05 np0005540825 systemd[1]: Started libpod-conmon-d7129e660cd81838f958b9948e05a9b73c0317af241d0bfd82b2123449e9451a.scope.
Dec  1 05:08:05 np0005540825 podman[241270]: 2025-12-01 10:08:04.975784159 +0000 UTC m=+0.043262038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:08:05 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:08:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77b88f9b92581b47e6fd90da835ba9096e3cc29912cad92ee691feb7141a904/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77b88f9b92581b47e6fd90da835ba9096e3cc29912cad92ee691feb7141a904/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77b88f9b92581b47e6fd90da835ba9096e3cc29912cad92ee691feb7141a904/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77b88f9b92581b47e6fd90da835ba9096e3cc29912cad92ee691feb7141a904/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:05 np0005540825 podman[241270]: 2025-12-01 10:08:05.090208147 +0000 UTC m=+0.157686046 container init d7129e660cd81838f958b9948e05a9b73c0317af241d0bfd82b2123449e9451a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  1 05:08:05 np0005540825 podman[241270]: 2025-12-01 10:08:05.103748432 +0000 UTC m=+0.171226231 container start d7129e660cd81838f958b9948e05a9b73c0317af241d0bfd82b2123449e9451a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_goodall, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:08:05 np0005540825 podman[241270]: 2025-12-01 10:08:05.107490793 +0000 UTC m=+0.174968702 container attach d7129e660cd81838f958b9948e05a9b73c0317af241d0bfd82b2123449e9451a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_goodall, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:08:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:05.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:05 np0005540825 great_goodall[241286]: {
Dec  1 05:08:05 np0005540825 great_goodall[241286]:    "1": [
Dec  1 05:08:05 np0005540825 great_goodall[241286]:        {
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            "devices": [
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "/dev/loop3"
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            ],
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            "lv_name": "ceph_lv0",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            "lv_size": "21470642176",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            "name": "ceph_lv0",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            "tags": {
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.cluster_name": "ceph",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.crush_device_class": "",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.encrypted": "0",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.osd_id": "1",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.type": "block",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.vdo": "0",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:                "ceph.with_tpm": "0"
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            },
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            "type": "block",
Dec  1 05:08:05 np0005540825 great_goodall[241286]:            "vg_name": "ceph_vg0"
Dec  1 05:08:05 np0005540825 great_goodall[241286]:        }
Dec  1 05:08:05 np0005540825 great_goodall[241286]:    ]
Dec  1 05:08:05 np0005540825 great_goodall[241286]: }
Dec  1 05:08:05 np0005540825 systemd[1]: libpod-d7129e660cd81838f958b9948e05a9b73c0317af241d0bfd82b2123449e9451a.scope: Deactivated successfully.
Dec  1 05:08:05 np0005540825 podman[241270]: 2025-12-01 10:08:05.433242534 +0000 UTC m=+0.500720353 container died d7129e660cd81838f958b9948e05a9b73c0317af241d0bfd82b2123449e9451a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_goodall, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Dec  1 05:08:05 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a77b88f9b92581b47e6fd90da835ba9096e3cc29912cad92ee691feb7141a904-merged.mount: Deactivated successfully.
Dec  1 05:08:05 np0005540825 podman[241270]: 2025-12-01 10:08:05.489765409 +0000 UTC m=+0.557243238 container remove d7129e660cd81838f958b9948e05a9b73c0317af241d0bfd82b2123449e9451a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_goodall, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:08:05 np0005540825 systemd[1]: libpod-conmon-d7129e660cd81838f958b9948e05a9b73c0317af241d0bfd82b2123449e9451a.scope: Deactivated successfully.
Dec  1 05:08:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 820 B/s rd, 182 B/s wr, 1 op/s
Dec  1 05:08:06 np0005540825 podman[241530]: 2025-12-01 10:08:06.271519034 +0000 UTC m=+0.070606346 container create 2b3d7872f5ca8ede8ddb96c1a82b4bc1e8ca0fe2cf17c45bc720dbef83611320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:08:06 np0005540825 podman[241530]: 2025-12-01 10:08:06.241021411 +0000 UTC m=+0.040108764 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:08:06 np0005540825 systemd[1]: Started libpod-conmon-2b3d7872f5ca8ede8ddb96c1a82b4bc1e8ca0fe2cf17c45bc720dbef83611320.scope.
Dec  1 05:08:06 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:08:06 np0005540825 python3.9[241511]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:08:06 np0005540825 podman[241530]: 2025-12-01 10:08:06.389669543 +0000 UTC m=+0.188756905 container init 2b3d7872f5ca8ede8ddb96c1a82b4bc1e8ca0fe2cf17c45bc720dbef83611320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:08:06 np0005540825 podman[241530]: 2025-12-01 10:08:06.401214224 +0000 UTC m=+0.200301536 container start 2b3d7872f5ca8ede8ddb96c1a82b4bc1e8ca0fe2cf17c45bc720dbef83611320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  1 05:08:06 np0005540825 podman[241530]: 2025-12-01 10:08:06.404543824 +0000 UTC m=+0.203631126 container attach 2b3d7872f5ca8ede8ddb96c1a82b4bc1e8ca0fe2cf17c45bc720dbef83611320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 05:08:06 np0005540825 great_perlman[241547]: 167 167
Dec  1 05:08:06 np0005540825 systemd[1]: libpod-2b3d7872f5ca8ede8ddb96c1a82b4bc1e8ca0fe2cf17c45bc720dbef83611320.scope: Deactivated successfully.
Dec  1 05:08:06 np0005540825 podman[241530]: 2025-12-01 10:08:06.410105084 +0000 UTC m=+0.209192396 container died 2b3d7872f5ca8ede8ddb96c1a82b4bc1e8ca0fe2cf17c45bc720dbef83611320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:08:06 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5208c3cd397c11da29527d13c918d099d17c0d340bad1633ce9cde695613ce6d-merged.mount: Deactivated successfully.
Dec  1 05:08:06 np0005540825 podman[241530]: 2025-12-01 10:08:06.45627226 +0000 UTC m=+0.255359542 container remove 2b3d7872f5ca8ede8ddb96c1a82b4bc1e8ca0fe2cf17c45bc720dbef83611320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Dec  1 05:08:06 np0005540825 systemd[1]: libpod-conmon-2b3d7872f5ca8ede8ddb96c1a82b4bc1e8ca0fe2cf17c45bc720dbef83611320.scope: Deactivated successfully.
Dec  1 05:08:06 np0005540825 podman[241622]: 2025-12-01 10:08:06.671714024 +0000 UTC m=+0.059979530 container create f0775613a61d48e0a5df800e3f068b3d39a7fdb72af9d78a84bd5387f2177acb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williamson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:08:06 np0005540825 systemd[1]: Started libpod-conmon-f0775613a61d48e0a5df800e3f068b3d39a7fdb72af9d78a84bd5387f2177acb.scope.
Dec  1 05:08:06 np0005540825 podman[241622]: 2025-12-01 10:08:06.647352196 +0000 UTC m=+0.035617722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:08:06 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:08:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057c0e55089267a00888cb39215dfec502a2613f75bf59b6441f990976d728db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057c0e55089267a00888cb39215dfec502a2613f75bf59b6441f990976d728db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057c0e55089267a00888cb39215dfec502a2613f75bf59b6441f990976d728db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057c0e55089267a00888cb39215dfec502a2613f75bf59b6441f990976d728db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:06 np0005540825 podman[241622]: 2025-12-01 10:08:06.770444548 +0000 UTC m=+0.158710044 container init f0775613a61d48e0a5df800e3f068b3d39a7fdb72af9d78a84bd5387f2177acb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williamson, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:08:06 np0005540825 podman[241622]: 2025-12-01 10:08:06.781294461 +0000 UTC m=+0.169559957 container start f0775613a61d48e0a5df800e3f068b3d39a7fdb72af9d78a84bd5387f2177acb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  1 05:08:06 np0005540825 podman[241622]: 2025-12-01 10:08:06.785098504 +0000 UTC m=+0.173363980 container attach f0775613a61d48e0a5df800e3f068b3d39a7fdb72af9d78a84bd5387f2177acb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  1 05:08:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:06.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:07.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:07 np0005540825 python3.9[241744]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:08:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:07.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:07 np0005540825 lvm[241910]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:08:07 np0005540825 lvm[241910]: VG ceph_vg0 finished
Dec  1 05:08:07 np0005540825 eager_williamson[241687]: {}
Dec  1 05:08:07 np0005540825 systemd[1]: libpod-f0775613a61d48e0a5df800e3f068b3d39a7fdb72af9d78a84bd5387f2177acb.scope: Deactivated successfully.
Dec  1 05:08:07 np0005540825 systemd[1]: libpod-f0775613a61d48e0a5df800e3f068b3d39a7fdb72af9d78a84bd5387f2177acb.scope: Consumed 1.206s CPU time.
Dec  1 05:08:07 np0005540825 podman[241622]: 2025-12-01 10:08:07.569541242 +0000 UTC m=+0.957806728 container died f0775613a61d48e0a5df800e3f068b3d39a7fdb72af9d78a84bd5387f2177acb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williamson, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:08:07 np0005540825 systemd[1]: var-lib-containers-storage-overlay-057c0e55089267a00888cb39215dfec502a2613f75bf59b6441f990976d728db-merged.mount: Deactivated successfully.
Dec  1 05:08:07 np0005540825 podman[241622]: 2025-12-01 10:08:07.623690374 +0000 UTC m=+1.011955850 container remove f0775613a61d48e0a5df800e3f068b3d39a7fdb72af9d78a84bd5387f2177acb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williamson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:08:07 np0005540825 systemd[1]: libpod-conmon-f0775613a61d48e0a5df800e3f068b3d39a7fdb72af9d78a84bd5387f2177acb.scope: Deactivated successfully.
Dec  1 05:08:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:08:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:08:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:07 np0005540825 python3.9[241986]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:08:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100808 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:08:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 0 op/s
Dec  1 05:08:08 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:08 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:08:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:08.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:08 np0005540825 python3.9[242164]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:08:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:08.943Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:09.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:08:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:08:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:08:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:08:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:08:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:08:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:08:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:08:09 np0005540825 python3.9[242318]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:08:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:08:09 np0005540825 podman[242321]: 2025-12-01 10:08:09.893547906 +0000 UTC m=+0.064895912 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:08:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 0 op/s
Dec  1 05:08:10 np0005540825 python3.9[242491]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:08:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:10.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:11.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:11] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:08:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:11] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:08:11 np0005540825 python3.9[242645]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:08:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 0 op/s
Dec  1 05:08:12 np0005540825 python3.9[242799]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:08:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:12.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:13.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:13.569Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:14 np0005540825 podman[242892]: 2025-12-01 10:08:14.233925045 +0000 UTC m=+0.089784274 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 05:08:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:08:14 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Scheduled restart job, restart counter is at 6.
Dec  1 05:08:14 np0005540825 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:08:14 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.729s CPU time.
Dec  1 05:08:14 np0005540825 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 05:08:14 np0005540825 python3.9[242972]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:14 np0005540825 podman[243020]: 2025-12-01 10:08:14.66951101 +0000 UTC m=+0.046124986 container create 622a92b2cdc67a6e8583860fa92bdde8dd0c70c580c37547ced38640adea5147 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:08:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee1c7a0c1d9d147610305627d26ee6c3e77e51dab7a16d668c5e23ae4b23f87/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee1c7a0c1d9d147610305627d26ee6c3e77e51dab7a16d668c5e23ae4b23f87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee1c7a0c1d9d147610305627d26ee6c3e77e51dab7a16d668c5e23ae4b23f87/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee1c7a0c1d9d147610305627d26ee6c3e77e51dab7a16d668c5e23ae4b23f87/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.pytvsu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:08:14 np0005540825 podman[243020]: 2025-12-01 10:08:14.73584385 +0000 UTC m=+0.112457836 container init 622a92b2cdc67a6e8583860fa92bdde8dd0c70c580c37547ced38640adea5147 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:08:14 np0005540825 podman[243020]: 2025-12-01 10:08:14.646721185 +0000 UTC m=+0.023335171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:08:14 np0005540825 podman[243020]: 2025-12-01 10:08:14.748548083 +0000 UTC m=+0.125162039 container start 622a92b2cdc67a6e8583860fa92bdde8dd0c70c580c37547ced38640adea5147 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 05:08:14 np0005540825 bash[243020]: 622a92b2cdc67a6e8583860fa92bdde8dd0c70c580c37547ced38640adea5147
Dec  1 05:08:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:14 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  1 05:08:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:14 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  1 05:08:14 np0005540825 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:08:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:14 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  1 05:08:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:14 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  1 05:08:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:14 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  1 05:08:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:14 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  1 05:08:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:14 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  1 05:08:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:14.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:14 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:08:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:08:15 np0005540825 python3.9[243229]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100815 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:08:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:15.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:08:16 np0005540825 python3.9[243382]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:16.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:17 np0005540825 python3.9[243534]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:17.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:17.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:17 np0005540825 python3.9[243687]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:08:18 np0005540825 python3.9[243840]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:18.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:18.944Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:19 np0005540825 python3.9[243992]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:19.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:08:20 np0005540825 python3.9[244146]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:08:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:20.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:20 np0005540825 python3.9[244298]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:20 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:08:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:20 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:08:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:20 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:08:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:21.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:21] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:08:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:21] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Dec  1 05:08:21 np0005540825 python3.9[244476]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:21 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:08:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:21 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:08:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:21 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:08:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec  1 05:08:22 np0005540825 python3.9[244629]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:22.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:23 np0005540825 python3.9[244781]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:23.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:23.571Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:08:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:23.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:23 np0005540825 podman[244907]: 2025-12-01 10:08:23.718696366 +0000 UTC m=+0.111927772 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:08:23 np0005540825 python3.9[244957]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec  1 05:08:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:08:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:08:24 np0005540825 python3.9[245116]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:24.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:08:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:25.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:25 np0005540825 python3.9[245269]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:26 np0005540825 python3.9[245422]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:08:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:08:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:08:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:26.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:08:27 np0005540825 python3.9[245574]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:27.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:27.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:08:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 9221 writes, 35K keys, 9221 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 9221 writes, 2074 syncs, 4.45 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 826 writes, 1269 keys, 826 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s#012Interval WAL: 826 writes, 400 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:08:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:08:28 np0005540825 python3.9[245728]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 05:08:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:28 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e0000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:28 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:08:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:28.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:28.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:08:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:28.946Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:08:29 np0005540825 python3.9[245895]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 05:08:29 np0005540825 systemd[1]: Reloading.
Dec  1 05:08:29 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:08:29 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:08:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:29 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:29.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:08:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:30 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:30 np0005540825 python3.9[246083]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:08:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100830 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:08:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:30 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e0001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:08:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:30 : epoch 692d690e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:08:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:30 : epoch 692d690e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:08:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:30.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:30 np0005540825 python3.9[246236]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:08:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:31 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:31.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:31] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:08:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:31] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:08:31 np0005540825 python3.9[246391]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:08:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:32 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:32 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Dec  1 05:08:32 np0005540825 python3.9[246544]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:08:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:32.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:33 np0005540825 python3.9[246697]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:08:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:33 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e0001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:33.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:33.573Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:08:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:33.573Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:08:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:33.573Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:08:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:33 : epoch 692d690e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:08:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:34 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:34 np0005540825 python3.9[246852]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:08:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:34 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  1 05:08:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:08:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:34.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:34 np0005540825 python3.9[247005]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 05:08:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:35 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:35.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:35 np0005540825 python3.9[247159]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
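The five ansible.legacy.command tasks in this stretch clear the failed state of the legacy tripleo_nova_* units one by one. A sketch of the same loop, with the unit list copied verbatim from the logged commands:

    import subprocess

    # Unit list copied verbatim from the logged ansible commands.
    units = [
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]
    for unit in units:
        # reset-failed clears a unit's "failed" state so it can be retired cleanly
        subprocess.run(["/usr/bin/systemctl", "reset-failed", unit], check=False)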
Dec  1 05:08:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:36 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e00089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:36 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec  1 05:08:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:36.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:37.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:08:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:37.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:08:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:37.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:08:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100837 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:08:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:37 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:37.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:37 np0005540825 python3.9[247314]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:38 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:38 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e00089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:38 np0005540825 python3.9[247467]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  1 05:08:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:38.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:38.946Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:38 np0005540825 python3.9[247619]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
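The ansible.builtin.file tasks here create the nova config directories as zuul:zuul, mode 0755, labeled container_file_t so podman can bind-mount them. A rough Python equivalent of one such task (the module sets the SELinux type via libselinux; chcon stands in for that step here):

    import grp, os, pwd, subprocess

    # One directory from the tasks above; parameters copied from the log.
    path = "/var/lib/openstack/config/nova"
    os.makedirs(path, mode=0o755, exist_ok=True)
    os.chown(path, pwd.getpwnam("zuul").pw_uid, grp.getgrnam("zuul").gr_gid)
    # The ansible module applies setype internally; chcon is the CLI stand-in.
    subprocess.run(["chcon", "-t", "container_file_t", path], check=False)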
Dec  1 05:08:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:39 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:39.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:08:39
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', '.nfs', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images', '.mgr', 'vms', 'cephfs.cephfs.data']
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
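The balancer pass above runs in upmap mode with a 5% max-misplaced budget and prepares 0 of a possible 10 upmap changes on this already-balanced cluster. The same state can be read back from the CLI; a sketch assuming local admin credentials:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    status = json.loads(out.stdout)
    print(status.get("mode"), status.get("active"))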
Dec  1 05:08:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:08:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
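The audit entry shows the mgr polling the mon for the OSD blocklist on its periodic tick. The same query from the CLI, assuming local admin credentials:

    import json, subprocess

    # Same query the mgr dispatches above, issued through the CLI.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    print(json.loads(out.stdout))  # blocklisted client addresses, if any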
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
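Each pg_autoscaler pair above is the same arithmetic: the pool's share of raw capacity times its bias times the cluster-wide PG budget, then quantized to a power of two. A simplified reconstruction, assuming the default 100 target PGs per OSD across 3 OSDs, which reproduces the '.mgr' numbers; the real module also applies per-pool minimums and hysteresis, which is why the tiny pools above stay quantized at 16 or 32:

    def pg_target(capacity_ratio, bias, num_osds, target_pg_per_osd=100):
        # Raw target: share of capacity * bias * cluster-wide PG budget.
        raw = capacity_ratio * bias * num_osds * target_pg_per_osd
        # Quantize down to a power of two, never below 1 (the real module's
        # per-pool minimums and change hysteresis are omitted here).
        pg = 1
        while pg * 2 <= max(raw, 1):
            pg *= 2
        return raw, pg

    print(pg_target(7.185749983720779e-06, 1.0, 3))  # ~0.0021557 -> 1, as logged for '.mgr'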
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:08:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:08:39 np0005540825 python3.9[247773]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:08:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:40 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:40 np0005540825 podman[247846]: 2025-12-01 10:08:40.241257403 +0000 UTC m=+0.092258010 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
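The health_status events are podman executing the container's configured healthcheck ('test': '/openstack/healthcheck' against the mounted healthchecks directory). The same check can be run on demand; the container name is copied from the event:

    import subprocess

    # Container name copied from the health_status event above.
    r = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    # Exit code 0 is "healthy", matching health_status=healthy in the event.
    print("healthy" if r.returncode == 0 else f"unhealthy: {r.stdout or r.stderr}")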
Dec  1 05:08:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:40 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  1 05:08:40 np0005540825 python3.9[247944]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:40.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:41 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e00096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:41.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:41] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec  1 05:08:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:41] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec  1 05:08:41 np0005540825 python3.9[248105]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:42 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:42 np0005540825 python3.9[248275]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:42 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  1 05:08:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:42.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:42 np0005540825 python3.9[248427]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:43 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:43.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:43.574Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:43 np0005540825 python3.9[248580]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:44 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e00096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:44 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec  1 05:08:44 np0005540825 podman[248733]: 2025-12-01 10:08:44.382880947 +0000 UTC m=+0.074145512 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  1 05:08:44 np0005540825 python3.9[248734]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:08:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:44.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:45 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:45.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:46 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:46 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e00096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Dec  1 05:08:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:46.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:47.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:47 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:47.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:48 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:48 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:08:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:48.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:48.947Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:49 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:49.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:08:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:50 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:50 np0005540825 python3.9[248912]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  1 05:08:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:50 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:08:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:50.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:51 np0005540825 python3.9[249065]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 05:08:51 np0005540825 ceph-osd[82809]: bluestore.MempoolThread fragmentation_score=0.000030 took=0.000075s
Dec  1 05:08:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:51 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:51] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec  1 05:08:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:08:51] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec  1 05:08:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:51.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:52 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100852 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
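haproxy marks backend nfs.cephfs.1 DOWN after a refused Layer4 connect, having brought nfs.cephfs.0 back UP a few seconds earlier. A Layer4 check is just a bare TCP connect; in this sketch the target host and the NFS port 2049 are assumptions, since the log names only the backend server:

    import socket

    def l4_up(host, port, timeout=2.0):
        # haproxy's Layer4 check is a bare TCP connect; refusal -> DOWN.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Host and port are placeholders; the log names only backend nfs.cephfs.1.
    print(l4_up("192.168.122.101", 2049))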
Dec  1 05:08:52 np0005540825 python3.9[249225]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
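The getent/group/user tasks ensure a nova account pinned to uid/gid 42436 with supplementary membership in libvirt and a /bin/sh shell. The shell-level equivalent of what the modules converge on for a fresh host (idempotence checks omitted):

    import subprocess

    # Values copied from the logged module parameters.
    subprocess.run(["groupadd", "-g", "42436", "nova"], check=False)
    subprocess.run(
        ["useradd", "-u", "42436", "-g", "nova", "-G", "libvirt",
         "-s", "/bin/sh", "-c", "nova user", "nova"],
        check=False,
    )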
Dec  1 05:08:52 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 05:08:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:52 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:08:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:52.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:53 np0005540825 systemd-logind[789]: New session 55 of user zuul.
Dec  1 05:08:53 np0005540825 systemd[1]: Started Session 55 of User zuul.
Dec  1 05:08:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:53 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:53 np0005540825 systemd[1]: session-55.scope: Deactivated successfully.
Dec  1 05:08:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:53.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:53 np0005540825 systemd-logind[789]: Session 55 logged out. Waiting for processes to exit.
Dec  1 05:08:53 np0005540825 systemd-logind[789]: Removed session 55.
Dec  1 05:08:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:53.575Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:54 np0005540825 podman[249388]: 2025-12-01 10:08:54.044039356 +0000 UTC m=+0.128893299 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  1 05:08:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:54 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:54 np0005540825 python3.9[249428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:08:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:54 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:08:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:08:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:08:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:08:54 np0005540825 python3.9[249563]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583733.5731668-3433-151942559935558/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
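Each config file lands through the usual ansible pair: a stat that hashes the existing remote file, then a copy that rewrites it only when the sha1 differs (the logged checksum is the incoming content's hash). A sketch of that compare-before-write step:

    import hashlib, os, shutil

    def copy_if_changed(src, dest):
        # Mirrors ansible's stat-then-copy: hash both sides, write on mismatch.
        new_sum = hashlib.sha1(open(src, "rb").read()).hexdigest()
        old_sum = None
        if os.path.exists(dest):
            old_sum = hashlib.sha1(open(dest, "rb").read()).hexdigest()
        if new_sum != old_sum:
            shutil.copy2(src, dest)  # mode/setype are applied in a separate step
            return True
        return False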
Dec  1 05:08:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:54.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:55 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:55.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:55 np0005540825 python3.9[249714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:08:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:56 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:56 np0005540825 python3.9[249791]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:56 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 05:08:56 np0005540825 python3.9[249941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:08:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:56.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:57.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:57 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:57.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:57 np0005540825 python3.9[250063]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583736.337352-3433-227069145816395/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:58 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:58 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:08:58 np0005540825 python3.9[250214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:08:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:08:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:08:58.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:08:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:08:58.948Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:08:59 np0005540825 python3.9[250335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583737.762247-3433-259453554612782/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:08:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:08:59 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:08:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:08:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:08:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:08:59.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:08:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:09:00 np0005540825 python3.9[250489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:09:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:00 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:00 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:09:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:00 : epoch 692d690e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:09:00 np0005540825 python3.9[250610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583739.4275184-3433-127607208447366/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:09:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:00.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:09:01] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:09:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:09:01] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
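The mgr's prometheus module is serving the /metrics endpoint that Prometheus 2.51.0 scrapes above (~48 KiB per scrape). A sketch of fetching it by hand; the host comes from the log, but the port is an assumption (9283 is the module's default, the log only shows the path):

import urllib.request

URL = 'http://192.168.122.100:9283/metrics'  # port assumed (module default)

with urllib.request.urlopen(URL, timeout=5) as resp:
    body = resp.read().decode()
# Lines starting with '#' are HELP/TYPE metadata, the rest are samples.
print(f'{len(body)} bytes, '
      f'{sum(1 for l in body.splitlines() if not l.startswith("#"))} samples')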
Dec  1 05:09:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:01 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:01 np0005540825 python3.9[250761]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:09:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:01.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:01 np0005540825 python3.9[250908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583740.8297431-3433-139164876972779/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:09:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:02 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0001250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:02 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:09:02 np0005540825 python3.9[251060]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:09:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:02.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:03 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:03.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:03 : epoch 692d690e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:09:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:03 : epoch 692d690e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:09:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:03.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:09:03 np0005540825 python3.9[251213]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:09:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:04 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:04 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:09:04 np0005540825 python3.9[251366]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:09:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:09:04.562 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:09:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:09:04.563 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:09:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:09:04.563 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:09:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:09:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:04.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:05 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:05.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:05 np0005540825 python3.9[251519]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:09:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:06 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:06 np0005540825 python3.9[251643]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764583744.9459927-3754-97060335367804/.source _original_basename=.fftosx7z follow=False checksum=a6c0c6e8488fe7f9a260d803f94979e5038c9ef6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
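This copy task writes the node's stable compute UUID, and `attributes=+i` asks Ansible to set the filesystem immutable attribute on it after the write, so the identity file cannot be changed or deleted without first clearing the flag. Roughly equivalent steps sketched in Python (placeholder UUID; owner and mode taken from the task; `chattr` is the usual shell-level way to set `+i`):

import os, shutil, subprocess

path = '/var/lib/nova/compute_id'

with open(path, 'w') as f:
    f.write('00000000-0000-0000-0000-000000000000\n')  # placeholder UUID
os.chmod(path, 0o400)                          # mode=0400 as in the task
shutil.chown(path, user='nova', group='nova')  # owner=nova, group=nova
subprocess.run(['chattr', '+i', path], check=True)  # the "+i" attribute;
# even root must `chattr -i` before the file can be modified again.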
Dec  1 05:09:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:06 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:09:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:06 : epoch 692d690e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:09:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:06.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:07.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:09:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:07.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
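Alertmanager keeps retrying POSTs to the Ceph dashboard's prometheus_receiver on compute-1/compute-2 and gives up after two attempts (first an i/o timeout on dial, then context deadline exceeded), so these notifications are being dropped. A hypothetical stub showing the contract such a receiver has to meet; the real receiver is the Ceph dashboard, not this:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Alertmanager POSTs a JSON body containing an "alerts" list and only
# needs a 2xx back. Nothing is answering on compute-1/-2 above, hence
# the timeouts.
class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != '/api/prometheus_receiver':
            self.send_error(404)
            return
        body = self.rfile.read(int(self.headers.get('Content-Length', 0)))
        payload = json.loads(body or b'{}')
        print('received', len(payload.get('alerts', [])), 'alert(s)')
        self.send_response(200)
        self.end_headers()

if __name__ == '__main__':
    HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()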
Dec  1 05:09:07 np0005540825 python3.9[251795]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:09:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:07 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:07.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:07 np0005540825 python3.9[251949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:09:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:08 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:08 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:09:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 05:09:08 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 05:09:08 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:08 np0005540825 python3.9[252128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583747.4201055-3832-249494904422689/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:09:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:08.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:08.949Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:09:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1022 B/s wr, 3 op/s
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:09:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:09 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:09.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:09 np0005540825 python3.9[252304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
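The audit lines above show the cephadm mgr driving the mon with structured JSON commands (config-key set, auth get, osd tree, osd blocklist ls, config generate-minimal-conf). The same interface is reachable from any authorized client through librados; a minimal sketch using the python-rados binding, with the conffile path assumed and a client.admin keyring required:

import json
import rados  # python3-rados binding shipped with Ceph

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Same command shape as the audited dispatches above.
    cmd = {'prefix': 'osd blocklist ls', 'format': 'json'}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
    print('rc=%d, %d bytes of output' % (ret, len(outbuf)))
finally:
    cluster.shutdown()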
Dec  1 05:09:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:09:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:09:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:09:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:09:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:09:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:09:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
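The recurring _set_new_cache_sizes line is the mon's memory autotuner splitting its cache budget between the incremental-map, full-map, and key-value (RocksDB) caches; the three allocations should sum to just under cache_size. A quick check of the numbers printed above:

# Figures copied from the _set_new_cache_sizes line.
cache_size = 1020054731
inc_alloc = full_alloc = 348127232
kv_alloc = 318767104

total = inc_alloc + full_alloc + kv_alloc   # 1015021568
assert total <= cache_size                  # allocations fit the budget
print(f'{total / 2**30:.3f} GiB allocated of '
      f'{cache_size / 2**30:.3f} GiB budget')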
Dec  1 05:09:10 np0005540825 podman[252516]: 2025-12-01 10:09:10.027614289 +0000 UTC m=+0.072475177 container create 595e4e39cf6cc777725a35daf035a1656d8b692dd2f123eed109ff957816185c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  1 05:09:10 np0005540825 python3.9[252503]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764583748.752168-3877-229256844621597/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 05:09:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:10 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:10 np0005540825 podman[252516]: 2025-12-01 10:09:09.996718856 +0000 UTC m=+0.041579804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:09:10 np0005540825 systemd[1]: Started libpod-conmon-595e4e39cf6cc777725a35daf035a1656d8b692dd2f123eed109ff957816185c.scope.
Dec  1 05:09:10 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:09:10 np0005540825 podman[252516]: 2025-12-01 10:09:10.136026305 +0000 UTC m=+0.180887253 container init 595e4e39cf6cc777725a35daf035a1656d8b692dd2f123eed109ff957816185c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  1 05:09:10 np0005540825 podman[252516]: 2025-12-01 10:09:10.144436672 +0000 UTC m=+0.189297560 container start 595e4e39cf6cc777725a35daf035a1656d8b692dd2f123eed109ff957816185c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 05:09:10 np0005540825 pensive_swirles[252533]: 167 167
Dec  1 05:09:10 np0005540825 systemd[1]: libpod-595e4e39cf6cc777725a35daf035a1656d8b692dd2f123eed109ff957816185c.scope: Deactivated successfully.
Dec  1 05:09:10 np0005540825 conmon[252533]: conmon 595e4e39cf6cc777725a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-595e4e39cf6cc777725a35daf035a1656d8b692dd2f123eed109ff957816185c.scope/container/memory.events
Dec  1 05:09:10 np0005540825 podman[252516]: 2025-12-01 10:09:10.148696957 +0000 UTC m=+0.193557905 container attach 595e4e39cf6cc777725a35daf035a1656d8b692dd2f123eed109ff957816185c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 05:09:10 np0005540825 podman[252516]: 2025-12-01 10:09:10.150166856 +0000 UTC m=+0.195027784 container died 595e4e39cf6cc777725a35daf035a1656d8b692dd2f123eed109ff957816185c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:09:10 np0005540825 systemd[1]: var-lib-containers-storage-overlay-862f4d7e88cb7044cf7ea7155c53440cec05f14c61c39384ddd6e4d04f995d45-merged.mount: Deactivated successfully.
Dec  1 05:09:10 np0005540825 podman[252516]: 2025-12-01 10:09:10.197495984 +0000 UTC m=+0.242356872 container remove 595e4e39cf6cc777725a35daf035a1656d8b692dd2f123eed109ff957816185c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:09:10 np0005540825 systemd[1]: libpod-conmon-595e4e39cf6cc777725a35daf035a1656d8b692dd2f123eed109ff957816185c.scope: Deactivated successfully.
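The ~150 ms create/init/start/attach/died/remove burst for pensive_swirles above is the one-shot container pattern cephadm uses to probe a host: run a single command inside the ceph image, capture stdout (here "167 167", which matches the ceph uid/gid), and discard the container. A sketch of the same pattern; the image digest is from the log, but the exact probe command is an assumption:

import subprocess

IMAGE = ('quay.io/ceph/ceph@sha256:'
         '7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec')

# One-shot probe: --rm makes the container disappear after the command
# exits, which is why each probe logs a full create/start/died/remove cycle.
out = subprocess.run(
    ['podman', 'run', '--rm', IMAGE, 'stat', '-c', '%u %g', '/var/lib/ceph'],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # expect "167 167", the ceph uid/gid baked into the image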
Dec  1 05:09:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:10 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:10 np0005540825 podman[252581]: 2025-12-01 10:09:10.400788809 +0000 UTC m=+0.074210423 container create bc18592758157beda844db19fc4197c0459778b7dd2e1fdaaaa51aeea58a26b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:09:10 np0005540825 systemd[1]: Started libpod-conmon-bc18592758157beda844db19fc4197c0459778b7dd2e1fdaaaa51aeea58a26b0.scope.
Dec  1 05:09:10 np0005540825 podman[252581]: 2025-12-01 10:09:10.370269736 +0000 UTC m=+0.043691400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:09:10 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:09:10 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe2d57f7701d889bcb6c34aa2c5997e22b46ebd5000e3d540b7c537c54a6827f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:10 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe2d57f7701d889bcb6c34aa2c5997e22b46ebd5000e3d540b7c537c54a6827f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:10 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe2d57f7701d889bcb6c34aa2c5997e22b46ebd5000e3d540b7c537c54a6827f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:10 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe2d57f7701d889bcb6c34aa2c5997e22b46ebd5000e3d540b7c537c54a6827f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:10 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe2d57f7701d889bcb6c34aa2c5997e22b46ebd5000e3d540b7c537c54a6827f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:10 np0005540825 podman[252581]: 2025-12-01 10:09:10.523741977 +0000 UTC m=+0.197163621 container init bc18592758157beda844db19fc4197c0459778b7dd2e1fdaaaa51aeea58a26b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  1 05:09:10 np0005540825 podman[252581]: 2025-12-01 10:09:10.542138743 +0000 UTC m=+0.215560347 container start bc18592758157beda844db19fc4197c0459778b7dd2e1fdaaaa51aeea58a26b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:09:10 np0005540825 podman[252581]: 2025-12-01 10:09:10.546381207 +0000 UTC m=+0.219802811 container attach bc18592758157beda844db19fc4197c0459778b7dd2e1fdaaaa51aeea58a26b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 05:09:10 np0005540825 podman[252595]: 2025-12-01 10:09:10.548840374 +0000 UTC m=+0.098053817 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 05:09:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:10.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:10 np0005540825 quirky_lederberg[252599]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:09:10 np0005540825 quirky_lederberg[252599]: --> All data devices are unavailable
Dec  1 05:09:10 np0005540825 systemd[1]: libpod-bc18592758157beda844db19fc4197c0459778b7dd2e1fdaaaa51aeea58a26b0.scope: Deactivated successfully.
Dec  1 05:09:10 np0005540825 podman[252581]: 2025-12-01 10:09:10.982533597 +0000 UTC m=+0.655955261 container died bc18592758157beda844db19fc4197c0459778b7dd2e1fdaaaa51aeea58a26b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:09:11 np0005540825 systemd[1]: var-lib-containers-storage-overlay-fe2d57f7701d889bcb6c34aa2c5997e22b46ebd5000e3d540b7c537c54a6827f-merged.mount: Deactivated successfully.
Dec  1 05:09:11 np0005540825 podman[252581]: 2025-12-01 10:09:11.05116846 +0000 UTC m=+0.724590044 container remove bc18592758157beda844db19fc4197c0459778b7dd2e1fdaaaa51aeea58a26b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 05:09:11 np0005540825 systemd[1]: libpod-conmon-bc18592758157beda844db19fc4197c0459778b7dd2e1fdaaaa51aeea58a26b0.scope: Deactivated successfully.
Dec  1 05:09:11 np0005540825 python3.9[252766]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  1 05:09:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  1 05:09:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:09:11] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:09:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:09:11] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:09:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:11 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:11.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:11 np0005540825 podman[252962]: 2025-12-01 10:09:11.763320157 +0000 UTC m=+0.043866244 container create 86292db0f4f04fb31de88b2e2aa1f05e3340bc2c7479d9ad323eceb44ccb5805 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 05:09:11 np0005540825 systemd[1]: Started libpod-conmon-86292db0f4f04fb31de88b2e2aa1f05e3340bc2c7479d9ad323eceb44ccb5805.scope.
Dec  1 05:09:11 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:09:11 np0005540825 podman[252962]: 2025-12-01 10:09:11.74784353 +0000 UTC m=+0.028389647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:09:11 np0005540825 podman[252962]: 2025-12-01 10:09:11.843198493 +0000 UTC m=+0.123744600 container init 86292db0f4f04fb31de88b2e2aa1f05e3340bc2c7479d9ad323eceb44ccb5805 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 05:09:11 np0005540825 podman[252962]: 2025-12-01 10:09:11.849509553 +0000 UTC m=+0.130055670 container start 86292db0f4f04fb31de88b2e2aa1f05e3340bc2c7479d9ad323eceb44ccb5805 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_poincare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:09:11 np0005540825 podman[252962]: 2025-12-01 10:09:11.853605014 +0000 UTC m=+0.134151101 container attach 86292db0f4f04fb31de88b2e2aa1f05e3340bc2c7479d9ad323eceb44ccb5805 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_poincare, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  1 05:09:11 np0005540825 hardcore_poincare[253028]: 167 167
Dec  1 05:09:11 np0005540825 systemd[1]: libpod-86292db0f4f04fb31de88b2e2aa1f05e3340bc2c7479d9ad323eceb44ccb5805.scope: Deactivated successfully.
Dec  1 05:09:11 np0005540825 podman[252962]: 2025-12-01 10:09:11.85457714 +0000 UTC m=+0.135123227 container died 86292db0f4f04fb31de88b2e2aa1f05e3340bc2c7479d9ad323eceb44ccb5805 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  1 05:09:11 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b40c898e82d282c117e1ab02e8d6d0b0abd596da21ccf9e2c27048f0a6c71e4c-merged.mount: Deactivated successfully.
Dec  1 05:09:11 np0005540825 podman[252962]: 2025-12-01 10:09:11.900715655 +0000 UTC m=+0.181261782 container remove 86292db0f4f04fb31de88b2e2aa1f05e3340bc2c7479d9ad323eceb44ccb5805 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_poincare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:09:11 np0005540825 systemd[1]: libpod-conmon-86292db0f4f04fb31de88b2e2aa1f05e3340bc2c7479d9ad323eceb44ccb5805.scope: Deactivated successfully.
Dec  1 05:09:12 np0005540825 python3.9[253033]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 05:09:12 np0005540825 podman[253053]: 2025-12-01 10:09:12.076278833 +0000 UTC m=+0.058374497 container create faf01fbcea87113b026c12b595235f8a7dd8fba0e77916bc0250d50548d84c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  1 05:09:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:12 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100912 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:09:12 np0005540825 systemd[1]: Started libpod-conmon-faf01fbcea87113b026c12b595235f8a7dd8fba0e77916bc0250d50548d84c85.scope.
Dec  1 05:09:12 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:09:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49cafae72318bdf98fc85fd0eb0604f80aaa50d9f4c0ee8bc06811680234bcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:12 np0005540825 podman[253053]: 2025-12-01 10:09:12.055126572 +0000 UTC m=+0.037222266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:09:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49cafae72318bdf98fc85fd0eb0604f80aaa50d9f4c0ee8bc06811680234bcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49cafae72318bdf98fc85fd0eb0604f80aaa50d9f4c0ee8bc06811680234bcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49cafae72318bdf98fc85fd0eb0604f80aaa50d9f4c0ee8bc06811680234bcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:12 np0005540825 podman[253053]: 2025-12-01 10:09:12.163412664 +0000 UTC m=+0.145508278 container init faf01fbcea87113b026c12b595235f8a7dd8fba0e77916bc0250d50548d84c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  1 05:09:12 np0005540825 podman[253053]: 2025-12-01 10:09:12.172277833 +0000 UTC m=+0.154373447 container start faf01fbcea87113b026c12b595235f8a7dd8fba0e77916bc0250d50548d84c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Dec  1 05:09:12 np0005540825 podman[253053]: 2025-12-01 10:09:12.175393687 +0000 UTC m=+0.157489341 container attach faf01fbcea87113b026c12b595235f8a7dd8fba0e77916bc0250d50548d84c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:09:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:12 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0003200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:12 np0005540825 competent_galois[253093]: {
Dec  1 05:09:12 np0005540825 competent_galois[253093]:    "1": [
Dec  1 05:09:12 np0005540825 competent_galois[253093]:        {
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            "devices": [
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "/dev/loop3"
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            ],
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            "lv_name": "ceph_lv0",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            "lv_size": "21470642176",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            "name": "ceph_lv0",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            "tags": {
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.cluster_name": "ceph",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.crush_device_class": "",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.encrypted": "0",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.osd_id": "1",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.type": "block",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.vdo": "0",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:                "ceph.with_tpm": "0"
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            },
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            "type": "block",
Dec  1 05:09:12 np0005540825 competent_galois[253093]:            "vg_name": "ceph_vg0"
Dec  1 05:09:12 np0005540825 competent_galois[253093]:        }
Dec  1 05:09:12 np0005540825 competent_galois[253093]:    ]
Dec  1 05:09:12 np0005540825 competent_galois[253093]: }
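
[editor's note] The competent_galois output above is ceph-volume's JSON report of LVM-backed OSDs: a map of OSD id to a list of logical-volume entries (here osd.1 on /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3, unencrypted), which cephadm then records via the config-key commands seen later. A minimal sketch, assuming the JSON payload has been captured to a file (filename hypothetical), that maps OSD ids to their devices:

    import json

    # Parse ceph-volume style output: {"<osd_id>": [{...lv entry...}, ...], ...}
    with open("ceph_volume_lvm_list.json") as fh:  # hypothetical capture of the payload above
        report = json.load(fh)

    for osd_id, entries in report.items():
        for lv in entries:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: path={lv['lv_path']} "
                  f"devices={','.join(lv.get('devices', []))} "
                  f"encrypted={tags.get('ceph.encrypted', '?')} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")
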
Dec  1 05:09:12 np0005540825 systemd[1]: libpod-faf01fbcea87113b026c12b595235f8a7dd8fba0e77916bc0250d50548d84c85.scope: Deactivated successfully.
Dec  1 05:09:12 np0005540825 podman[253053]: 2025-12-01 10:09:12.55643238 +0000 UTC m=+0.538527994 container died faf01fbcea87113b026c12b595235f8a7dd8fba0e77916bc0250d50548d84c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:09:12 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d49cafae72318bdf98fc85fd0eb0604f80aaa50d9f4c0ee8bc06811680234bcb-merged.mount: Deactivated successfully.
Dec  1 05:09:12 np0005540825 podman[253053]: 2025-12-01 10:09:12.60089318 +0000 UTC m=+0.582988794 container remove faf01fbcea87113b026c12b595235f8a7dd8fba0e77916bc0250d50548d84c85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:09:12 np0005540825 systemd[1]: libpod-conmon-faf01fbcea87113b026c12b595235f8a7dd8fba0e77916bc0250d50548d84c85.scope: Deactivated successfully.
Dec  1 05:09:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:12.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
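
[editor's note] The radosgw "beast" access lines have a fixed shape: request pointer, client IP, user, bracketed timestamp, quoted request line, HTTP status, byte count, then a latency field. The anonymous "HEAD /" probes arriving every couple of seconds from 192.168.122.100/.102 look like load-balancer health checks. A sketch that extracts the useful fields (the regex is mine, inferred from the lines above, not a documented radosgw format):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous '
            '[01/Dec/2025:10:09:12.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    if m:
        print(m.group("ip"), m.group("req"), m.group("status"), m.group("latency"))
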
Dec  1 05:09:13 np0005540825 python3[253291]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 05:09:13 np0005540825 podman[253343]: 2025-12-01 10:09:13.258777433 +0000 UTC m=+0.052031255 container create afde5695b6ea544dbd8151bac14b5b9b69536252ee8bfdc9715bbe5c9d64e5c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldberg, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:09:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 464 B/s wr, 2 op/s
Dec  1 05:09:13 np0005540825 systemd[1]: Started libpod-conmon-afde5695b6ea544dbd8151bac14b5b9b69536252ee8bfdc9715bbe5c9d64e5c6.scope.
Dec  1 05:09:13 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:09:13 np0005540825 podman[253343]: 2025-12-01 10:09:13.233765698 +0000 UTC m=+0.027019600 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:09:13 np0005540825 podman[253343]: 2025-12-01 10:09:13.335189935 +0000 UTC m=+0.128443767 container init afde5695b6ea544dbd8151bac14b5b9b69536252ee8bfdc9715bbe5c9d64e5c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 05:09:13 np0005540825 podman[253343]: 2025-12-01 10:09:13.346184842 +0000 UTC m=+0.139438664 container start afde5695b6ea544dbd8151bac14b5b9b69536252ee8bfdc9715bbe5c9d64e5c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:09:13 np0005540825 podman[253343]: 2025-12-01 10:09:13.350497208 +0000 UTC m=+0.143751040 container attach afde5695b6ea544dbd8151bac14b5b9b69536252ee8bfdc9715bbe5c9d64e5c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldberg, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:09:13 np0005540825 eloquent_goldberg[253368]: 167 167
Dec  1 05:09:13 np0005540825 systemd[1]: libpod-afde5695b6ea544dbd8151bac14b5b9b69536252ee8bfdc9715bbe5c9d64e5c6.scope: Deactivated successfully.
Dec  1 05:09:13 np0005540825 podman[253343]: 2025-12-01 10:09:13.355152684 +0000 UTC m=+0.148406506 container died afde5695b6ea544dbd8151bac14b5b9b69536252ee8bfdc9715bbe5c9d64e5c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 05:09:13 np0005540825 systemd[1]: var-lib-containers-storage-overlay-668061f295bb3552c242d2eb4a17675e4b14d682bdbad314a57bfe9d5e6640dd-merged.mount: Deactivated successfully.
Dec  1 05:09:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:13 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:13 np0005540825 podman[253343]: 2025-12-01 10:09:13.397538178 +0000 UTC m=+0.190792000 container remove afde5695b6ea544dbd8151bac14b5b9b69536252ee8bfdc9715bbe5c9d64e5c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 05:09:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:13.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:13 np0005540825 systemd[1]: libpod-conmon-afde5695b6ea544dbd8151bac14b5b9b69536252ee8bfdc9715bbe5c9d64e5c6.scope: Deactivated successfully.
Dec  1 05:09:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:13.578Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:09:13 np0005540825 podman[253399]: 2025-12-01 10:09:13.607285968 +0000 UTC m=+0.056132906 container create bed28c7788519869b93307c0e94645f453993d25c5d9a5b915ba410b77a39c79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:09:13 np0005540825 systemd[1]: Started libpod-conmon-bed28c7788519869b93307c0e94645f453993d25c5d9a5b915ba410b77a39c79.scope.
Dec  1 05:09:13 np0005540825 podman[253399]: 2025-12-01 10:09:13.583792884 +0000 UTC m=+0.032639842 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:09:13 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:09:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b2ae6eaa6838e977ffa3602a8a1a93ba092a5e19a46dfe0a9711108a6ca535/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b2ae6eaa6838e977ffa3602a8a1a93ba092a5e19a46dfe0a9711108a6ca535/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b2ae6eaa6838e977ffa3602a8a1a93ba092a5e19a46dfe0a9711108a6ca535/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:13 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b2ae6eaa6838e977ffa3602a8a1a93ba092a5e19a46dfe0a9711108a6ca535/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:13 np0005540825 podman[253399]: 2025-12-01 10:09:13.708188701 +0000 UTC m=+0.157035639 container init bed28c7788519869b93307c0e94645f453993d25c5d9a5b915ba410b77a39c79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_euclid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  1 05:09:13 np0005540825 podman[253399]: 2025-12-01 10:09:13.715980231 +0000 UTC m=+0.164827189 container start bed28c7788519869b93307c0e94645f453993d25c5d9a5b915ba410b77a39c79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:09:13 np0005540825 podman[253399]: 2025-12-01 10:09:13.721338396 +0000 UTC m=+0.170185334 container attach bed28c7788519869b93307c0e94645f453993d25c5d9a5b915ba410b77a39c79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Dec  1 05:09:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:14 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:14 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:14 np0005540825 lvm[253495]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:09:14 np0005540825 lvm[253495]: VG ceph_vg0 finished
Dec  1 05:09:14 np0005540825 eloquent_euclid[253416]: {}
Dec  1 05:09:14 np0005540825 systemd[1]: libpod-bed28c7788519869b93307c0e94645f453993d25c5d9a5b915ba410b77a39c79.scope: Deactivated successfully.
Dec  1 05:09:14 np0005540825 podman[253399]: 2025-12-01 10:09:14.452913267 +0000 UTC m=+0.901760215 container died bed28c7788519869b93307c0e94645f453993d25c5d9a5b915ba410b77a39c79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:09:14 np0005540825 systemd[1]: libpod-bed28c7788519869b93307c0e94645f453993d25c5d9a5b915ba410b77a39c79.scope: Consumed 1.150s CPU time.
Dec  1 05:09:14 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e8b2ae6eaa6838e977ffa3602a8a1a93ba092a5e19a46dfe0a9711108a6ca535-merged.mount: Deactivated successfully.
Dec  1 05:09:14 np0005540825 podman[253399]: 2025-12-01 10:09:14.503559643 +0000 UTC m=+0.952406561 container remove bed28c7788519869b93307c0e94645f453993d25c5d9a5b915ba410b77a39c79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_euclid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:09:14 np0005540825 podman[253497]: 2025-12-01 10:09:14.510016958 +0000 UTC m=+0.081595993 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
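
[editor's note] The health_status event above is podman running the container's configured healthcheck ('test': '/openstack/healthcheck') and journaling the outcome plus the current failing streak. A sketch of watching these events programmatically; the command shape and the JSON field names are assumptions about podman's event output and should be verified against the installed version:

    import json
    import subprocess

    # Follow podman health_status events; report containers that are not healthy.
    # Field names (Name, HealthStatus) assumed from podman's event JSON.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=health_status",
         "--format", "{{json .}}"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        status = ev.get("HealthStatus", "")
        if status and status != "healthy":
            print(f"container {ev.get('Name')} is {status}")
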
Dec  1 05:09:14 np0005540825 systemd[1]: libpod-conmon-bed28c7788519869b93307c0e94645f453993d25c5d9a5b915ba410b77a39c79.scope: Deactivated successfully.
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.569387) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583754569456, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1300, "num_deletes": 251, "total_data_size": 2461139, "memory_usage": 2497432, "flush_reason": "Manual Compaction"}
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583754585123, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 2398176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18796, "largest_seqno": 20095, "table_properties": {"data_size": 2391991, "index_size": 3448, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12805, "raw_average_key_size": 19, "raw_value_size": 2379753, "raw_average_value_size": 3683, "num_data_blocks": 152, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764583635, "oldest_key_time": 1764583635, "file_creation_time": 1764583754, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 15770 microseconds, and 6320 cpu microseconds.
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.585167) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 2398176 bytes OK
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.585194) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.586578) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.586595) EVENT_LOG_v1 {"time_micros": 1764583754586591, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.586613) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2455508, prev total WAL file size 2491133, number of live WAL files 2.
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.587383) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(2341KB)], [41(12MB)]
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583754587437, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15809253, "oldest_snapshot_seqno": -1}
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5030 keys, 13610952 bytes, temperature: kUnknown
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583754662478, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 13610952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13576004, "index_size": 21270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 128121, "raw_average_key_size": 25, "raw_value_size": 13483223, "raw_average_value_size": 2680, "num_data_blocks": 874, "num_entries": 5030, "num_filter_entries": 5030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764583754, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.662662) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13610952 bytes
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.664211) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.5 rd, 181.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 12.8 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(12.3) write-amplify(5.7) OK, records in: 5550, records dropped: 520 output_compression: NoCompression
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.664225) EVENT_LOG_v1 {"time_micros": 1764583754664219, "job": 20, "event": "compaction_finished", "compaction_time_micros": 75107, "compaction_time_cpu_micros": 30113, "output_level": 6, "num_output_files": 1, "total_output_size": 13610952, "num_input_records": 5550, "num_output_records": 5030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583754664737, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583754666952, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.587294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.666995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.667000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.667002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.667003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:09:14.667004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
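
[editor's note] The ceph-mon rocksdb lines above are one flush-then-manual-compaction cycle: job 19 flushes the memtable to L0 table #43, job 20 compacts 1@0 + 1@6 into the new L6 table #44, and both inputs (#43, #41) are deleted, leaving lsm_state [0,0,0,0,0,0,1]. The EVENT_LOG_v1 entries carry a JSON payload after the marker, so they are machine-readable. A minimal sketch (log path hypothetical) extracting flush and compaction results:

    import json
    import re

    EVENT = re.compile(r'EVENT_LOG_v1 (\{.*\})')

    # Scan a captured journal extract for rocksdb EVENT_LOG_v1 payloads.
    with open("ceph-mon.log") as fh:  # hypothetical extract of the lines above
        for line in fh:
            m = EVENT.search(line)
            if not m:
                continue
            ev = json.loads(m.group(1))
            if ev.get("event") == "flush_finished":
                print(f"job {ev['job']}: flush finished, lsm_state={ev['lsm_state']}")
            elif ev.get("event") == "compaction_finished":
                print(f"job {ev['job']}: {ev['num_input_records']} -> "
                      f"{ev['num_output_records']} records in "
                      f"{ev['compaction_time_micros']} us")
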
Dec  1 05:09:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:09:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:14.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 464 B/s wr, 2 op/s
Dec  1 05:09:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:15 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0003200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:15.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:09:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:16 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:16 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
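
[editor's note] The recurring ganesha.nfsd TIRPC EVENT lines show svc_vc_recv failing to parse an expected proxy-protocol header on fd 39 and marking the transport dead; the steady one-to-two-second cadence on the same fd is consistent with an external TCP health check probing the NFS endpoint rather than a real client. A small tallying sketch (regex mine, syslog path hypothetical) to see which service threads absorb these events:

    import re
    from collections import Counter

    GANESHA = re.compile(
        r'ganesha\.nfsd-\d+\[(?P<thread>svc_\d+)\] rpc :TIRPC :EVENT '
        r':svc_vc_recv: \S+ fd (?P<fd>\d+)'
    )

    counts = Counter()
    with open("messages") as fh:  # hypothetical syslog extract
        for line in fh:
            m = GANESHA.search(line)
            if m:
                counts[(m.group("thread"), m.group("fd"))] += 1

    for (thread, fd), n in counts.most_common():
        print(f"{thread} fd={fd}: {n} dead transports")
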
Dec  1 05:09:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:16.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:17.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:09:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:17.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:09:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 278 B/s rd, 92 B/s wr, 0 op/s
Dec  1 05:09:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:17 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:17.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:18 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0003200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:18 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:18.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:18.950Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:09:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 278 B/s rd, 92 B/s wr, 0 op/s
Dec  1 05:09:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:19 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:19.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:09:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:20 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14ac000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:20 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0004300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:20.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:09:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:09:21] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:09:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:09:21] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:09:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:21 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:21.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:22 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:22 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:22.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:09:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:23 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0004300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:23.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:23 np0005540825 podman[253342]: 2025-12-01 10:09:23.528844174 +0000 UTC m=+10.332776184 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 05:09:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:23.579Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:09:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:23.579Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:09:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:23.579Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
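
[editor's note] Alertmanager repeatedly fails to deliver the ceph-dashboard webhook to compute-1/compute-2 on port 8443 (dial timeouts, then "context deadline exceeded" after the retry budget). Alertmanager webhooks are a plain JSON POST carrying an "alerts" list, so reachability of the /api/prometheus_receiver path can be checked with a stand-in listener. A test stub only, not the Ceph dashboard implementation (port and path taken from the log; the real endpoint is served by ceph-mgr):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Alertmanager POSTs a JSON body containing an "alerts" list.
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            payload = json.loads(body or b"{}")
            print(f"{self.path}: {len(payload.get('alerts', []))} alert(s)")
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
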
Dec  1 05:09:23 np0005540825 podman[253654]: 2025-12-01 10:09:23.731165624 +0000 UTC m=+0.076661450 container create ca896e0bfad55626a9f81954231a858c2995f203dd2970ce570c6b9c7bcc1d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:09:23 np0005540825 podman[253654]: 2025-12-01 10:09:23.681046001 +0000 UTC m=+0.026541857 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 05:09:23 np0005540825 python3[253291]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
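
[editor's note] ansible-edpm_container_manage logs both the declarative config_data and the podman create command it derives from it, and the mapping visible above is largely mechanical: environment -> --env, net -> --network, volumes -> --volume, security_opt -> --security-opt, and so on. A simplified illustration of that translation (my sketch, not the module's actual code; the real module also handles healthchecks, restart policy, the conmon pidfile, log driver, and more):

    # Illustrative flattening of an edpm-style config_data dict into podman argv.
    def podman_create_argv(name, cfg):
        argv = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        argv += ["--network", cfg.get("net", "bridge")]
        argv += [f"--privileged={cfg.get('privileged', False)}"]
        if "user" in cfg:
            argv += ["--user", cfg["user"]]
        for opt in cfg.get("security_opt", []):
            argv += ["--security-opt", opt]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        if "command" in cfg:
            argv += cfg["command"].split()  # naive split; the module passes shell words
        return argv

    cfg = {"image": "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
           "net": "none", "privileged": False, "user": "root",
           "environment": {"__OS_DEBUG": False}}
    print(" ".join(podman_create_argv("nova_compute_init", cfg)))
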
Dec  1 05:09:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:24 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:24 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:24 np0005540825 podman[253717]: 2025-12-01 10:09:24.286763377 +0000 UTC m=+0.144878150 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:09:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:09:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:09:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:24.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:09:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:09:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:25 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:25 np0005540825 python3.9[253871]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:09:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:25.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:26 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0004300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:26 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0004300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:26 np0005540825 python3.9[254026]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  1 05:09:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:26.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:27.132Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:09:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:09:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:27 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:27.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:27 np0005540825 python3.9[254179]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 05:09:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:28 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:28 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0004300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:28 np0005540825 python3[254332]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 05:09:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:28.951Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:09:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:28.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:28 np0005540825 podman[254371]: 2025-12-01 10:09:28.979454072 +0000 UTC m=+0.078434747 container create 09bb02350eb17b03ab54ddb939d2b1a808bc64b77f09d5116bd141e5ccbf5742 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:09:28 np0005540825 podman[254371]: 2025-12-01 10:09:28.942060333 +0000 UTC m=+0.041041018 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 05:09:28 np0005540825 python3[254332]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  1 05:09:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:09:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:29 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:29.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:29 np0005540825 python3.9[254562]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:09:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:09:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:30 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:30 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14ac002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:30.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:09:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:09:31] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:09:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:09:31] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:09:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:31 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0004300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:31.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:31 np0005540825 python3.9[254717]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:09:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:32 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:32 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14bc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:32 np0005540825 python3.9[254869]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764583771.8452783-4153-224214099212077/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 05:09:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:32.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:33 np0005540825 python3.9[254945]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 05:09:33 np0005540825 systemd[1]: Reloading.
Dec  1 05:09:33 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:09:33 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:09:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:09:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:33 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14ac002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:33.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:33.580Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:09:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:34 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14d0004300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:34 np0005540825 python3.9[255058]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 05:09:34 np0005540825 systemd[1]: Reloading.
Dec  1 05:09:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:34 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:34 np0005540825 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 05:09:34 np0005540825 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 05:09:34 np0005540825 systemd[1]: Starting nova_compute container...
Dec  1 05:09:34 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:09:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87da5fbaf8eca9472d533ac968b5fd1e135728ba235fb2a71f3271f4b255806c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87da5fbaf8eca9472d533ac968b5fd1e135728ba235fb2a71f3271f4b255806c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87da5fbaf8eca9472d533ac968b5fd1e135728ba235fb2a71f3271f4b255806c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87da5fbaf8eca9472d533ac968b5fd1e135728ba235fb2a71f3271f4b255806c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87da5fbaf8eca9472d533ac968b5fd1e135728ba235fb2a71f3271f4b255806c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:34 np0005540825 podman[255098]: 2025-12-01 10:09:34.834717819 +0000 UTC m=+0.133672648 container init 09bb02350eb17b03ab54ddb939d2b1a808bc64b77f09d5116bd141e5ccbf5742 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 05:09:34 np0005540825 podman[255098]: 2025-12-01 10:09:34.846536418 +0000 UTC m=+0.145491187 container start 09bb02350eb17b03ab54ddb939d2b1a808bc64b77f09d5116bd141e5ccbf5742 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 05:09:34 np0005540825 podman[255098]: nova_compute
Dec  1 05:09:34 np0005540825 nova_compute[255113]: + sudo -E kolla_set_configs
Dec  1 05:09:34 np0005540825 systemd[1]: Started nova_compute container.
Dec  1 05:09:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:09:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:34.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Validating config file
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Copying service configuration files
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Deleting /etc/ceph
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Creating directory /etc/ceph
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /etc/ceph
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 05:09:34 np0005540825 nova_compute[255113]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  1 05:09:35 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 05:09:35 np0005540825 nova_compute[255113]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  1 05:09:35 np0005540825 nova_compute[255113]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  1 05:09:35 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  1 05:09:35 np0005540825 nova_compute[255113]: INFO:__main__:Writing out command to execute
Dec  1 05:09:35 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  1 05:09:35 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  1 05:09:35 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  1 05:09:35 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 05:09:35 np0005540825 nova_compute[255113]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 05:09:35 np0005540825 nova_compute[255113]: ++ cat /run_command
Dec  1 05:09:35 np0005540825 nova_compute[255113]: + CMD=nova-compute
Dec  1 05:09:35 np0005540825 nova_compute[255113]: + ARGS=
Dec  1 05:09:35 np0005540825 nova_compute[255113]: + sudo kolla_copy_cacerts
Dec  1 05:09:35 np0005540825 nova_compute[255113]: + [[ ! -n '' ]]
Dec  1 05:09:35 np0005540825 nova_compute[255113]: + . kolla_extend_start
Dec  1 05:09:35 np0005540825 nova_compute[255113]: Running command: 'nova-compute'
Dec  1 05:09:35 np0005540825 nova_compute[255113]: + echo 'Running command: '\''nova-compute'\'''
Dec  1 05:09:35 np0005540825 nova_compute[255113]: + umask 0022
Dec  1 05:09:35 np0005540825 nova_compute[255113]: + exec nova-compute
Dec  1 05:09:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:09:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:35 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:35.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:36 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:36 np0005540825 python3.9[255279]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:09:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:36 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14e0001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:37 np0005540825 python3.9[255429]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:09:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:37.143Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:09:37 np0005540825 nova_compute[255113]: 2025-12-01 10:09:37.167 255117 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  1 05:09:37 np0005540825 nova_compute[255113]: 2025-12-01 10:09:37.168 255117 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  1 05:09:37 np0005540825 nova_compute[255113]: 2025-12-01 10:09:37.168 255117 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  1 05:09:37 np0005540825 nova_compute[255113]: 2025-12-01 10:09:37.168 255117 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec  1 05:09:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:09:37 np0005540825 nova_compute[255113]: 2025-12-01 10:09:37.310 255117 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:09:37 np0005540825 nova_compute[255113]: 2025-12-01 10:09:37.337 255117 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:09:37 np0005540825 nova_compute[255113]: 2025-12-01 10:09:37.338 255117 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec  1 05:09:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:37 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14a4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:37.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:37 np0005540825 nova_compute[255113]: 2025-12-01 10:09:37.828 255117 INFO nova.virt.driver [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec  1 05:09:37 np0005540825 nova_compute[255113]: 2025-12-01 10:09:37.992 255117 INFO nova.compute.provider_config [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec  1 05:09:38 np0005540825 python3.9[255585]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 05:09:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:38 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.121 255117 DEBUG oslo_concurrency.lockutils [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.122 255117 DEBUG oslo_concurrency.lockutils [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.122 255117 DEBUG oslo_concurrency.lockutils [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.123 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.123 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.123 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.123 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.124 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.124 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.124 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.125 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.125 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.125 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.126 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.126 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.126 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.127 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.127 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.127 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.128 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.128 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.128 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.129 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.129 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.129 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.129 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.130 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.130 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.130 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.131 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.131 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.131 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.132 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.132 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.132 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.133 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.133 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.133 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.134 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.134 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.134 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.135 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.135 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.135 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.136 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.136 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.137 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.137 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.137 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.138 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.138 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.138 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.138 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.139 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.139 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.139 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.140 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.140 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.140 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.141 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.141 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.141 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.142 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.142 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.142 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.143 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.143 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.143 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.143 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.144 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.144 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.145 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.145 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.146 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.146 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.146 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.147 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.147 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.147 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.148 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.148 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.148 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.149 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.149 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.149 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.150 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.150 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.150 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.151 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.151 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.151 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.152 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.152 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.152 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.153 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.153 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.153 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.154 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.154 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.154 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.155 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.155 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.155 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.156 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.156 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.156 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.156 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.157 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.157 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.157 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.158 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.158 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.158 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.159 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.159 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.160 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.160 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.160 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.161 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.161 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.161 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.162 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.162 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.162 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.162 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.163 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.163 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.163 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.163 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.163 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.164 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.164 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.164 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.164 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.164 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.165 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.165 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.165 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.165 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.166 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.166 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.166 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.166 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.166 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.167 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.167 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.167 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.167 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.167 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.168 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.168 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.168 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.168 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.169 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.169 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.169 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.169 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.170 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.170 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.170 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.171 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.171 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.171 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.171 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.171 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.172 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.172 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.172 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.172 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.173 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.173 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.173 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.173 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.174 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.174 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.174 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.174 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.174 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.175 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.175 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.175 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.175 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.175 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.176 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.176 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.176 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.176 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.176 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.177 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.177 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.177 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.177 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.177 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.178 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.178 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.178 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.178 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.178 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.179 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.179 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.179 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.179 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.179 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.180 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.180 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.180 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.180 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.180 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.181 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.181 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.181 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.181 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.181 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.182 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.182 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.182 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.182 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.182 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.183 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.183 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.183 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.183 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.183 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.184 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.184 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.184 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.184 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.185 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.185 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.185 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.185 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.185 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.186 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.186 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.186 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.186 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.187 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.187 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.187 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.187 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.187 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.188 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.188 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.188 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.188 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.189 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.189 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.189 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.189 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.189 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.190 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.190 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.190 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.190 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.191 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.191 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.191 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.191 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.191 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.192 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.192 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.192 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.192 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.193 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.193 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.193 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.193 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.194 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.194 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.194 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.194 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.194 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.195 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.195 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.195 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.195 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.195 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.196 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.196 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.196 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.196 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.197 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.197 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.197 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
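Annotation: database.connection and database.slave_connection are masked because oslo.db registers them as secret. The db_* options parameterize oslo.db's retrying of transient database errors (oslo_db.api.wrap_db_retry). Roughly, with the values logged above, the backoff behaves like this sketch; it is illustrative, not the oslo.db implementation:

import time

def call_with_db_retry(func, db_max_retries=20, db_retry_interval=1,
                       db_inc_retry_interval=True, db_max_retry_interval=10):
    # Start at db_retry_interval seconds between attempts; double the
    # interval when db_inc_retry_interval is True, capped at
    # db_max_retry_interval; give up after db_max_retries retries.
    interval = db_retry_interval
    attempt = 0
    while True:
        try:
            return func()
        except Exception:
            attempt += 1
            if attempt > db_max_retries:
                raise
            time.sleep(interval)
            if db_inc_retry_interval:
                interval = min(interval * 2, db_max_retry_interval)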
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.197 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.198 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.198 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.198 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.198 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.198 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.198 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.199 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.199 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.199 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.199 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.199 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.199 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.199 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.200 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.200 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.200 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.200 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.200 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.200 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.200 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.201 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.201 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.201 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
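Annotation: ephemeral-storage encryption is disabled on this node, but the cipher/key_size pairing is worth a note. aes-xts-plain64 with key_size = 512 is effectively AES-256, because XTS splits the key into two halves, one for the data cipher and one for the tweak. A quick illustration with the cryptography package; nova itself only hands these values down to the disk-encryption layer:

import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(512 // 8)   # 512-bit XTS key = two 256-bit AES keys
tweak = os.urandom(16)       # per-sector tweak in real disk encryption
cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))
encryptor = cipher.encryptor()
ciphertext = encryptor.update(b'\x00' * 512)  # encrypt one 512-byte sector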
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.201 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.201 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.201 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.202 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.202 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.202 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.202 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.202 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.202 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.202 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.202 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.203 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.203 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.203 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.203 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.203 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.203 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.204 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.204 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.204 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.204 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.204 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.204 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.204 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.205 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.205 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.205 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.205 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.205 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.205 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
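Annotation: the glance.* block, like cyborg.* above and ironic.*/keystone.* below, is mostly keystoneauth1's generic session and adapter options: cafile/certfile/keyfile/insecure control TLS, while service_type/valid_interfaces/region_name drive endpoint selection from the Keystone catalog. A sketch of the adapter nova would build from the values logged above, assuming keystoneauth1; the real session carries the group's auth_* options, which are not part of this dump:

from keystoneauth1 import adapter, session

sess = session.Session()  # placeholder: real session holds auth and TLS settings
glance_api = adapter.Adapter(
    session=sess,
    service_type='image',     # glance.service_type
    interface='internal',     # glance.valid_interfaces (single entry here)
    region_name='regionOne',  # glance.region_name
)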
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.205 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.206 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.206 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.206 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.206 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.206 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.206 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.206 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.207 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.207 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.207 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.207 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.207 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.207 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.207 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.208 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.208 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.208 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.208 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.208 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.209 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.209 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.209 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.209 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.209 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.209 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.209 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
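Annotation: the image_cache.* values set the base-image cache policy: the periodic manager runs every 2400 s, removing unused original images older than 86400 s and unused resized copies older than 3600 s from the _base subdirectory under the instances path. A simplified sketch of the aging check; the real manager also confirms that no instance on the host still references an image before deleting it:

import os
import time

def prune_unused(cache_dir, min_age_seconds):
    # Remove cached images untouched for min_age_seconds; a stand-in
    # for nova's check, which tracks image use explicitly.
    now = time.time()
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > min_age_seconds:
            os.unlink(path)

# hypothetical instances path; originals age out after 24 h here,
# resized copies would use the 3600 s threshold instead
prune_unused('/var/lib/nova/instances/_base', 86400)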
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.210 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.210 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.210 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.210 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.210 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.210 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.211 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.211 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.211 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.211 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.211 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.211 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.212 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.212 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.212 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.212 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.212 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.212 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.212 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.212 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.213 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.213 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.213 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.213 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.213 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.213 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.214 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.214 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.214 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.214 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.214 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.214 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.214 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.215 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.215 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.215 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.215 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.215 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.215 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.215 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.216 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.216 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.216 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.216 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.216 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
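Annotation: key_manager.backend = barbican selects castellan's Barbican driver (key_manager.fixed_key is masked as a secret), and that driver is what consumes the [barbican] options above: the Keystone auth endpoint, the internal endpoint type, and 60 retries with a 1 s delay. A sketch, assuming castellan's documented factory; exact keyword naming may vary by release:

from castellan import key_manager
from oslo_config import cfg

# Returns the driver selected by key_manager.backend, here Barbican,
# configured from the [barbican] section logged above.
manager = key_manager.API(configuration=cfg.CONF)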
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.216 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.217 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.217 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.217 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.217 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.217 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.217 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.217 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.217 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.218 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.218 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.218 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.218 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.218 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.218 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.219 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.219 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.219 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.219 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.219 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.219 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.220 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.220 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.220 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.220 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
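Annotation: the [vault] section is the alternative castellan backend, unused here since the backend is barbican; its defaults describe a KV v2 secrets engine mounted at "secret" on http://127.0.0.1:8200. For illustration only, the equivalent read with the hvac client (hypothetical secret name; castellan's own driver speaks the same HTTP API directly):

import hvac

client = hvac.Client(url='http://127.0.0.1:8200')  # vault.vault_url
secret = client.secrets.kv.v2.read_secret_version(
    path='my-key-id',      # hypothetical secret name
    mount_point='secret',  # vault.kv_mountpoint (KV v2 per vault.kv_version)
)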
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.220 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.220 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.220 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.221 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.221 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.221 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.221 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.221 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.221 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.221 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.222 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.222 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.222 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.222 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.222 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.222 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.223 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.223 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.223 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.223 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.223 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.223 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.223 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.224 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.224 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.224 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.224 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.224 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.224 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.224 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.225 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.225 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.225 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.225 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.225 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.225 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.225 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.226 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.226 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.226 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.226 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.226 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.226 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.226 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.227 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.227 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.227 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.227 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.227 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.227 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.227 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.228 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.228 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.228 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.228 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.228 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.228 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.228 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.229 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.229 255117 WARNING oslo_config.cfg [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  1 05:09:38 np0005540825 nova_compute[255113]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  1 05:09:38 np0005540825 nova_compute[255113]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  1 05:09:38 np0005540825 nova_compute[255113]: and ``live_migration_inbound_addr`` respectively.
Dec  1 05:09:38 np0005540825 nova_compute[255113]: ).  Its value may be silently ignored in the future.#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.229 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
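The WARNING above, together with the logged value just shown, means this deployment still sets the deprecated live_migration_uri = qemu+tls://%s/system. A minimal nova.conf sketch of the two replacement options the warning names, assuming the same qemu+tls transport is wanted; the inbound address is a hypothetical placeholder, since the log shows live_migration_inbound_addr unset (None):

    [libvirt]
    # replaces live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # hypothetical placeholder; set to this host's migration-network address
    live_migration_inbound_addr = <migration address>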
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.229 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.229 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.229 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.230 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.230 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.230 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.230 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.230 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.230 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.230 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.231 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.231 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.231 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.231 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.231 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.231 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.231 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.232 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.rbd_secret_uuid        = 365f19c2-81e5-5edd-b6b4-280555214d3a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.232 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.232 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.232 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.232 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.232 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.232 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.233 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.233 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.233 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.233 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.233 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.233 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.233 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.234 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.234 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.234 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.234 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.234 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.234 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.234 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.235 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.235 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.235 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.235 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.235 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.235 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.235 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.236 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.236 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.236 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.236 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.236 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.236 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
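Taken together, the libvirt.* values above describe a KVM host whose ephemeral disks live on Ceph RBD. A rough reconstruction of the non-default settings as a nova.conf fragment, copied from the logged values (a sketch for readability, not the deployment's actual file):

    [libvirt]
    virt_type = kvm
    cpu_mode = host-model
    hw_machine_type = x86_64=q35
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = 365f19c2-81e5-5edd-b6b4-280555214d3a
    live_migration_with_native_tls = True
    volume_use_multipath = True
    swtpm_enabled = True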
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.236 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.237 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.237 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.237 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.237 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.237 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.237 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.237 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.238 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.238 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.238 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.238 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.238 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.238 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.238 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.239 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.239 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.239 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.239 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.239 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.239 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.239 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.239 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.240 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.240 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.240 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.240 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.240 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
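The neutron.* block above is this compute node's client configuration for the networking service, including the instance metadata proxy. An equivalent nova.conf fragment assuming the logged values; the shared secret is redacted (****) in the log and stays a placeholder here:

    [neutron]
    auth_type = password
    region_name = regionOne
    valid_interfaces = internal
    ovs_bridge = br-int
    service_metadata_proxy = True
    # redacted in the log; must match the secret configured on the metadata agent
    metadata_proxy_shared_secret = <redacted>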
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.240 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.240 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.241 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.241 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.241 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.241 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.241 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.241 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.241 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.242 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.242 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.242 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.242 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.242 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.242 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.242 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.243 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.243 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.243 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.243 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.243 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.243 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.243 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.244 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.244 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.244 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.244 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.244 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.244 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.244 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.245 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.245 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.245 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.245 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.245 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.245 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.245 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.246 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.246 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.246 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.246 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.246 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.246 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.246 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.247 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
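The placement.* values above show how this nova-compute authenticates to the Placement API through Keystone password auth. A sketch of the corresponding nova.conf section built from the logged values, with the password left as the log's redaction:

    [placement]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = nova
    # redacted (****) in the log
    password = <redacted>
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    region_name = regionOne
    valid_interfaces = internal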
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.247 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.247 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.247 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.247 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.247 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.247 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.248 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.248 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.248 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.248 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.248 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.248 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.248 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.249 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.249 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.249 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.249 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.249 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.249 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.250 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.250 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.250 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.250 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.250 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.250 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.250 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.250 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.251 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.251 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.251 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.251 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.251 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.251 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.251 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.252 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.252 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.252 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.252 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.252 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.252 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.252 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.253 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.253 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.253 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.253 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.253 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.253 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.254 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.254 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.254 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.254 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.254 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.254 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.254 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
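The [filter_scheduler] and [metrics] options above govern host selection: enabled_filters prunes candidate hosts, then weighers rank the survivors, each contributing a normalized score scaled by its *_weight_multiplier. A rough Python sketch of how the logged multipliers combine, with hypothetical per-weigher scores (illustrative only, not nova's actual weigher code):

    # Each weigher emits a score normalized to [0, 1]; it is scaled by
    # its multiplier and summed per host. Names here are hypothetical.
    def weigh_host(scores, multipliers):
        return sum(multipliers[name] * scores.get(name, 0.0)
                   for name in multipliers)

    host_score = weigh_host(
        {"ram": 0.8, "cpu": 0.5, "io_ops": 0.2, "build_failure": 0.0},
        {"ram": 1.0, "cpu": 1.0, "io_ops": -1.0, "build_failure": 1000000.0},
    )
    # io_ops carries a negative multiplier (-1.0), so hosts with more
    # in-flight I/O weigh less, while a single recent build failure
    # would dominate everything through the 1e6 multiplier.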
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.254 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.255 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.255 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.255 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.255 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.255 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.255 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.255 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.256 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.256 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.256 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.256 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.256 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.256 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.256 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.257 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
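service_user.send_service_user_token = True above makes nova attach its own service token next to the incoming user token on calls to other services, so a long-running operation does not fail when the user's token expires mid-flight. Schematically (token values are placeholders):

    # Placeholder tokens; real ones come from Keystone.
    user_token = "gAAAAAB...user"        # the end user's (expirable) token
    service_token = "gAAAAAB...service"  # minted with [service_user] credentials

    # Both headers travel on nova's calls to glance/cinder/neutron;
    # keystonemiddleware on the far side accepts an expired user token
    # when it arrives alongside a valid service token.
    headers = {
        "X-Auth-Token": user_token,
        "X-Service-Token": service_token,
    }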
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.257 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.257 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.257 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.257 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.258 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.258 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.258 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.258 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.258 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.258 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.258 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.258 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.259 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.259 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.259 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.259 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.259 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.259 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.259 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.260 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.260 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.260 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.260 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.260 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.260 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.260 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.261 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.261 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.261 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.261 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.261 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.261 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.261 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.262 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.262 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.262 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.262 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.262 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.263 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.263 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.263 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.263 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.263 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.263 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.263 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.263 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.264 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.264 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.264 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.264 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.264 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.264 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.265 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.265 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.265 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.265 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.265 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.265 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.266 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.266 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.266 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
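The [vnc] group wires the console path: QEMU listens on vnc.server_listen (::0) on the compute node, the noVNC proxy reaches it via vnc.server_proxyclient_address (192.168.122.100), and clients are handed vnc.novncproxy_base_url plus a one-time console token. Roughly (the token value and exact query shape are hypothetical and vary with the noVNC version):

    base = ("https://nova-novncproxy-cell1-public-openstack"
            ".apps-crc.testing/vnc_lite.html")
    token = "5a94e5a1-..."  # one-time token minted by the console API
    print(f"{base}?token={token}")  # URL handed back to the client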
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.266 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.266 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.266 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.266 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.267 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.267 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.267 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.267 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.267 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.267 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.268 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.268 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.268 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.268 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.268 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.268 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.268 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.269 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.269 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.269 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.269 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
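Among the [workarounds] above, enable_qemu_monitor_announce_self = True makes nova ask the QEMU monitor to re-announce the guest's MAC addresses after live migration so switches relearn the port, repeated qemu_monitor_announce_self_count (3) times at qemu_monitor_announce_self_interval (1 s) apart. A minimal sketch of the underlying QMP command, with the socket write stubbed out as a print:

    import json
    import time

    cmd = json.dumps({"execute": "announce-self"})  # QMP announce command
    for _ in range(3):        # qemu_monitor_announce_self_count
        print(cmd)            # stand-in for writing to the QMP socket
        time.sleep(1)         # qemu_monitor_announce_self_interval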
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.269 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.269 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.269 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.270 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.270 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.270 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.270 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.270 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.270 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.270 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.271 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
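wsgi.wsgi_log_format above is an ordinary Python %-style format string; substituting a hypothetical request shows the access-log line it produces:

    fmt = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
           'len: %(body_length)s time: %(wall_seconds).7f')
    print(fmt % {
        "client_ip": "192.168.122.1",
        "request_line": "GET /v2.1/servers HTTP/1.1",
        "status_code": 200,
        "body_length": 1843,
        "wall_seconds": 0.0421337,
    })
    # -> 192.168.122.1 "GET /v2.1/servers HTTP/1.1" status: 200 len: 1843 time: 0.0421337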
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.271 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.271 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.271 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.271 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.271 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.271 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.272 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.272 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.272 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.272 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.272 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.272 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.273 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.273 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
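With oslo_policy.enforce_new_defaults and enforce_scope both True, this deployment runs the secure-RBAC policy defaults, with operator overrides read from policy.yaml / policy.d. A minimal, hypothetical sketch of the enforcement call these options configure (rule name and credentials invented for illustration):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)  # honors policy_file / policy_dirs
    allowed = enforcer.enforce(
        "os_compute_api:servers:show",               # rule name
        {"project_id": "p1"},                        # target
        {"project_id": "p1", "roles": ["reader"]},   # credentials
    )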
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.273 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.273 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.273 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.273 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.274 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.274 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.274 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.274 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.274 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.274 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.275 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.275 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.275 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.275 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.275 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 kernel: ganesha.nfsd[245744]: segfault at 50 ip 00007f15889ae32e sp 00007f153cff8210 error 4 in libntirpc.so.5.8[7f1588993000+2c000] likely on CPU 1 (core 0, socket 1)
Dec  1 05:09:38 np0005540825 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
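The two kernel lines above record ganesha.nfsd crashing inside libntirpc.so.5.8: "segfault at 50" with error code 4 means a user-mode read of the unmapped address 0x50, i.e. a field read through a NULL struct pointer, and the bracketed mapping gives the library's load base. The faulting offset and instruction can be recovered directly from the log:

    ip = 0x7f15889ae32e      # faulting instruction pointer from the log
    base = 0x7f1588993000    # libntirpc.so.5.8 mapping base ([...+2c000])
    print(hex(ip - base))    # 0x1b32e -> feed to addr2line/objdump on the .so

    # The byte marked <45> in the Code: dump decodes as
    #   45 8b 65 50    mov 0x50(%r13),%r12d
    # a read at offset 0x50 off %r13 -- matching "segfault at 50"
    # with %r13 == NULL.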
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.276 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.276 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.276 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.276 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[243059]: 01/12/2025 10:09:38 : epoch 692d690e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f14b8003c70 fd 39 proxy ignored for local
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.276 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.278 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.278 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.279 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.279 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.280 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.280 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.280 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.281 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.281 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.281 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.282 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.282 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.283 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.283 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.283 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
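The [oslo_messaging_rabbit] values above mean nova's RPC queues are declared as durable RabbitMQ quorum queues (rabbit_quorum_queue = True maps to x-queue-type=quorum), with a 60 s heartbeat checked twice per timeout window. At the kombu/AMQP layer that corresponds roughly to the following (broker URL and queue name are hypothetical):

    import kombu

    conn = kombu.Connection(
        "amqp://nova:secret@rabbitmq.openstack.svc:5672//",
        heartbeat=60,                    # heartbeat_timeout_threshold
        ssl=False,                       # oslo_messaging_rabbit.ssl
        failover_strategy="round-robin", # kombu_failover_strategy
    )
    queue = kombu.Queue(
        "compute.np0005540825",
        durable=True,                                # amqp_durable_queues
        queue_arguments={"x-queue-type": "quorum"},  # rabbit_quorum_queue
    )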
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.284 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.284 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.284 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.285 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.285 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.286 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.286 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.287 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.287 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.287 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.288 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.288 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.289 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.289 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.289 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.290 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.290 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.290 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.291 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.291 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.291 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.292 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.292 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.292 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.293 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.293 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.293 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.294 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.294 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.294 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.295 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.295 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.295 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.296 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.296 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.296 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.297 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.297 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.297 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.298 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.298 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.298 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.299 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.299 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.299 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.300 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.300 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.300 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.301 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.301 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.302 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.302 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.302 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.303 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.303 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.303 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.303 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.304 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.304 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.305 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.305 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.306 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.306 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.307 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.307 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.307 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.308 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.308 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.308 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.309 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.309 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.310 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.310 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.310 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.311 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.311 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.312 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 systemd[1]: Started Process Core Dump (PID 255611/UID 0).
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.312 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.313 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.313 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.313 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.314 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.314 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.314 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.315 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.315 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.315 255117 DEBUG oslo_service.service [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.317 255117 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.333 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.334 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.335 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.336 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec  1 05:09:38 np0005540825 systemd[1]: Starting libvirt QEMU daemon...
Dec  1 05:09:38 np0005540825 systemd[1]: Started libvirt QEMU daemon.
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.424 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f745f745460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.428 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f745f745460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.429 255117 INFO nova.virt.libvirt.driver [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Connection event '1' reason 'None'
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.642 255117 WARNING nova.virt.libvirt.driver [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec  1 05:09:38 np0005540825 nova_compute[255113]: 2025-12-01 10:09:38.643 255117 DEBUG nova.virt.libvirt.volume.mount [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec  1 05:09:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:38.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:09:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:38.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:39 np0005540825 python3.9[255791]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  1 05:09:39 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:09:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:39.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:09:39
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.log', '.nfs', 'images', 'cephfs.cephfs.data']
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.506 255117 INFO nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Libvirt host capabilities <capabilities>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <host>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <uuid>4cd03307-de0c-4b81-bfb4-f23408ecf241</uuid>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <cpu>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <arch>x86_64</arch>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model>EPYC-Rome-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <vendor>AMD</vendor>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <microcode version='16777317'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <signature family='23' model='49' stepping='0'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='x2apic'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='tsc-deadline'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='osxsave'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='hypervisor'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='tsc_adjust'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='spec-ctrl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='stibp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='arch-capabilities'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='cmp_legacy'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='topoext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='virt-ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='lbrv'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='tsc-scale'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='vmcb-clean'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='pause-filter'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='pfthreshold'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='svme-addr-chk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='rdctl-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='skip-l1dfl-vmentry'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='mds-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature name='pschange-mc-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <pages unit='KiB' size='4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <pages unit='KiB' size='2048'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <pages unit='KiB' size='1048576'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </cpu>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <power_management>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <suspend_mem/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </power_management>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <iommu support='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <migration_features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <live/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <uri_transports>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <uri_transport>tcp</uri_transport>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <uri_transport>rdma</uri_transport>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </uri_transports>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </migration_features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <topology>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <cells num='1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <cell id='0'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:          <memory unit='KiB'>7864324</memory>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:          <pages unit='KiB' size='4'>1966081</pages>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:          <pages unit='KiB' size='2048'>0</pages>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:          <distances>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:            <sibling id='0' value='10'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:          </distances>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:          <cpus num='8'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:          </cpus>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        </cell>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </cells>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </topology>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <cache>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </cache>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <secmodel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model>selinux</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <doi>0</doi>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </secmodel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <secmodel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model>dac</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <doi>0</doi>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </secmodel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </host>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <guest>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <os_type>hvm</os_type>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <arch name='i686'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <wordsize>32</wordsize>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <domain type='qemu'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <domain type='kvm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </arch>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <pae/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <nonpae/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <acpi default='on' toggle='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <apic default='on' toggle='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <cpuselection/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <deviceboot/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <disksnapshot default='on' toggle='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <externalSnapshot/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </guest>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <guest>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <os_type>hvm</os_type>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <arch name='x86_64'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <wordsize>64</wordsize>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <domain type='qemu'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <domain type='kvm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </arch>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <acpi default='on' toggle='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <apic default='on' toggle='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <cpuselection/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <deviceboot/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <disksnapshot default='on' toggle='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <externalSnapshot/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </guest>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 
Dec  1 05:09:39 np0005540825 nova_compute[255113]: </capabilities>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.518 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.540 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  1 05:09:39 np0005540825 nova_compute[255113]: <domainCapabilities>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <domain>kvm</domain>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <arch>i686</arch>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <vcpu max='4096'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <iothreads supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <os supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <enum name='firmware'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <loader supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>rom</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pflash</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='readonly'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>yes</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>no</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='secure'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>no</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </loader>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </os>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <cpu>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='host-passthrough' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='hostPassthroughMigratable'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>on</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>off</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='maximum' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='maximumMigratable'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>on</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>off</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='host-model' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <vendor>AMD</vendor>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='x2apic'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='hypervisor'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='stibp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='overflow-recov'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='succor'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='lbrv'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc-scale'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='flushbyasid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='pause-filter'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='pfthreshold'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='disable' name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='custom' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Dhyana-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
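The listing nova_compute emits here mirrors the domain-capabilities report libvirt itself produces: each <model> element carries usable='yes' or 'no', and unusable models get a <blockers> element naming the host-missing features. A rough sketch of pulling the same XML on demand with the libvirt Python binding, assuming a local qemu:///system daemon on an x86_64 KVM host (the positional arguments follow the virConnectGetDomainCapabilities API; treat the exact values as assumptions, this is not Nova's own code path):

    import libvirt

    # Connect to the local libvirt daemon (assumed URI).
    conn = libvirt.open("qemu:///system")

    # Fetch the domain-capabilities XML for the default x86_64 KVM emulator:
    # (emulatorbin, arch, machine, virttype, flags).
    caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    print(caps_xml)

    conn.close()

The <cpu> section of that document contains exactly the kind of model/blockers pairs seen in this log excerpt.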
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Genoa'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='auto-ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='auto-ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-128'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-256'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-512'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
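Because every unusable model is paired with an explicit <blockers> list, the report can be summarized mechanically. A minimal offline sketch, assuming a well-formed extract of the XML (for example the whole <cpu> element) has been saved to a local file; the filename domcaps.xml is hypothetical and the script is illustrative only:

    import xml.etree.ElementTree as ET

    # Parse the extracted capabilities XML (hypothetical local file).
    root = ET.parse("domcaps.xml").getroot()

    # Collect models the host can run as-is.
    usable = [m.text for m in root.iter("model") if m.get("usable") == "yes"]

    # Map each blocked model to the features the host cannot provide.
    blocked = {
        b.get("model"): [f.get("name") for f in b.findall("feature")]
        for b in root.iter("blockers")
    }

    print("usable models:", ", ".join(sorted(usable)))
    for name, feats in sorted(blocked.items()):
        print(f"{name}: missing {', '.join(feats)}")

Run against this host's report, such a summary would show, for instance, that the Haswell family above is blocked largely by erms/invpcid/pcid (plus hle/rtm on the TSX variants), while the newer server models add long AVX-512 feature lists to their blockers.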
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v6'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v7'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='KnightsMill'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512er'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512pf'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='KnightsMill-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512er'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512pf'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G4-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tbm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G5-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tbm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SierraForest'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cmpccxadd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SierraForest-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cmpccxadd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='athlon'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='athlon-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='core2duo'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='core2duo-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='coreduo'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='coreduo-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='n270'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='n270-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='phenom'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='phenom-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </cpu>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <memoryBacking supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <enum name='sourceType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>file</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>anonymous</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>memfd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </memoryBacking>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <devices>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <disk supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='diskDevice'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>disk</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>cdrom</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>floppy</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>lun</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='bus'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>fdc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>scsi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>sata</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-non-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </disk>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <graphics supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vnc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>egl-headless</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dbus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </graphics>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <video supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='modelType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vga</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>cirrus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>none</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>bochs</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>ramfb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </video>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <hostdev supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='mode'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>subsystem</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='startupPolicy'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>default</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>mandatory</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>requisite</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>optional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='subsysType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pci</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>scsi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='capsType'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='pciBackend'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </hostdev>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <rng supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-non-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>random</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>egd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>builtin</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </rng>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <filesystem supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='driverType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>path</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>handle</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtiofs</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </filesystem>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <tpm supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tpm-tis</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tpm-crb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>emulator</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>external</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendVersion'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>2.0</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </tpm>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <redirdev supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='bus'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </redirdev>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <channel supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pty</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>unix</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </channel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <crypto supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>qemu</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>builtin</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </crypto>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <interface supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>default</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>passt</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </interface>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <panic supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>isa</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>hyperv</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </panic>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <console supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>null</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pty</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dev</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>file</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pipe</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>stdio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>udp</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tcp</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>unix</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>qemu-vdagent</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dbus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </console>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </devices>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <gic supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <vmcoreinfo supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <genid supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <backingStoreInput supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <backup supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <async-teardown supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <ps2 supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <sev supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <sgx supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <hyperv supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='features'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>relaxed</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vapic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>spinlocks</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vpindex</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>runtime</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>synic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>stimer</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>reset</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vendor_id</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>frequencies</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>reenlightenment</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tlbflush</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>ipi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>avic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>emsr_bitmap</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>xmm_input</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <defaults>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <spinlocks>4095</spinlocks>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <stimer_direct>on</stimer_direct>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </defaults>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </hyperv>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <launchSecurity supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='sectype'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tdx</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </launchSecurity>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: </domainCapabilities>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
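[editor's sketch — not part of the captured log] The XML documents dumped above and below are what libvirt returns from its getDomainCapabilities call, which nova's _get_domain_capabilities (host.py:1037, traced above) invokes once per (arch, machine_type) pair. A minimal way to reproduce one such query with libvirt-python, using the emulator path, arch, machine type, and virt type visible in the dump itself (the connection URI is an assumption; the log does not show it):

    import libvirt

    conn = libvirt.open('qemu:///system')      # assumed local system URI
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',               # emulator binary, matches <path> in the dump
        'i686',                                # arch, from the debug header below
        'pc',                                  # machine_type, from the debug header below
        'kvm',                                 # virt type, matches <domain>kvm</domain>
        0)                                     # flags (none defined for this call)
    print(caps_xml)                            # same <domainCapabilities> document as logged
    conn.close()

The equivalent shell query is virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine pc --virttype kvm.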
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.547 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  1 05:09:39 np0005540825 nova_compute[255113]: <domainCapabilities>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <domain>kvm</domain>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <arch>i686</arch>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <vcpu max='240'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <iothreads supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <os supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <enum name='firmware'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <loader supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>rom</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pflash</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='readonly'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>yes</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>no</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='secure'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>no</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </loader>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </os>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <cpu>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='host-passthrough' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='hostPassthroughMigratable'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>on</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>off</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='maximum' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='maximumMigratable'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>on</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>off</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='host-model' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <vendor>AMD</vendor>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='x2apic'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='hypervisor'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='stibp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='overflow-recov'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='succor'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='lbrv'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc-scale'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='flushbyasid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='pause-filter'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='pfthreshold'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='disable' name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='custom' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Dhyana-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Genoa'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='auto-ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='auto-ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-128'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-256'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-512'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v6'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v7'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='KnightsMill'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512er'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512pf'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='KnightsMill-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512er'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512pf'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G4-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tbm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G5-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tbm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SierraForest'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cmpccxadd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SierraForest-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cmpccxadd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='athlon'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='athlon-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='core2duo'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='core2duo-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='coreduo'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='coreduo-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='n270'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='n270-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='phenom'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='phenom-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </cpu>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <memoryBacking supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <enum name='sourceType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>file</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>anonymous</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>memfd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </memoryBacking>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <devices>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <disk supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='diskDevice'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>disk</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>cdrom</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>floppy</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>lun</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='bus'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>ide</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>fdc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>scsi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>sata</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-non-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </disk>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <graphics supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vnc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>egl-headless</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dbus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </graphics>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <video supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='modelType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vga</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>cirrus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>none</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>bochs</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>ramfb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </video>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <hostdev supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='mode'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>subsystem</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='startupPolicy'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>default</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>mandatory</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>requisite</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>optional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='subsysType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pci</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>scsi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='capsType'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='pciBackend'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </hostdev>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <rng supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-non-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>random</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>egd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>builtin</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </rng>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <filesystem supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='driverType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>path</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>handle</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtiofs</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </filesystem>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <tpm supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tpm-tis</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tpm-crb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>emulator</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>external</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendVersion'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>2.0</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </tpm>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <redirdev supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='bus'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </redirdev>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <channel supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pty</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>unix</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </channel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <crypto supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>qemu</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>builtin</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </crypto>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <interface supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>default</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>passt</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </interface>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <panic supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>isa</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>hyperv</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </panic>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <console supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>null</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pty</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dev</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>file</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pipe</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>stdio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>udp</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tcp</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>unix</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>qemu-vdagent</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dbus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </console>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </devices>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <gic supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <vmcoreinfo supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <genid supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <backingStoreInput supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <backup supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <async-teardown supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <ps2 supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <sev supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <sgx supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <hyperv supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='features'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>relaxed</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vapic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>spinlocks</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vpindex</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>runtime</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>synic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>stimer</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>reset</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vendor_id</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>frequencies</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>reenlightenment</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tlbflush</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>ipi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>avic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>emsr_bitmap</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>xmm_input</value>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <defaults>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <spinlocks>4095</spinlocks>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <stimer_direct>on</stimer_direct>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </defaults>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </hyperv>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <launchSecurity supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='sectype'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tdx</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </launchSecurity>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: </domainCapabilities>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
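
[editor's note] The XML dump that ends above is one of the per-machine-type domainCapabilities documents that nova's host.py fetches from libvirt. As a minimal sketch only (assuming libvirt-python is installed and qemu:///system is reachable; this is illustrative, not nova's actual code), the same document can be retrieved and the usable='no' CPU models listed with their blocking features. The emulator path, arch, and machine type below are the q35 values visible further down in this log:

import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open('qemu:///system')
caps_xml = conn.getDomainCapabilities(
    '/usr/libexec/qemu-kvm',   # <path> in the dump
    'x86_64',                  # <arch>
    'pc-q35-rhel9.8.0',        # <machine>
    'kvm')                     # <domain>
conn.close()

root = ET.fromstring(caps_xml)
custom = root.find(".//cpu/mode[@name='custom']")
for model in custom.findall('model'):
    if model.get('usable') != 'no':
        continue
    # each unusable model has a matching <blockers model='...'> element
    blockers = custom.find("blockers[@model='{}']".format(model.text))
    features = [f.get('name') for f in blockers] if blockers is not None else []
    print('{}: blocked by {}'.format(model.text, ', '.join(features)))
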
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.637 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.641 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  1 05:09:39 np0005540825 nova_compute[255113]: <domainCapabilities>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <domain>kvm</domain>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <arch>x86_64</arch>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <vcpu max='4096'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <iothreads supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <os supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <enum name='firmware'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>efi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <loader supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>rom</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pflash</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='readonly'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>yes</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>no</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='secure'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>yes</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>no</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </loader>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </os>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <cpu>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='host-passthrough' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='hostPassthroughMigratable'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>on</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>off</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='maximum' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='maximumMigratable'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>on</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>off</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='host-model' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <vendor>AMD</vendor>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='x2apic'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='hypervisor'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='stibp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='overflow-recov'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='succor'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='lbrv'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc-scale'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='flushbyasid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='pause-filter'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='pfthreshold'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='disable' name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='custom' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Dhyana-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Genoa'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='auto-ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='auto-ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-128'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-256'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-512'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v6'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v7'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='KnightsMill'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512er'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512pf'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='KnightsMill-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512er'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512pf'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G4-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tbm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G5-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tbm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SierraForest'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cmpccxadd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SierraForest-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cmpccxadd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='athlon'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='athlon-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='core2duo'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='core2duo-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='coreduo'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='coreduo-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='n270'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='n270-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='phenom'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='phenom-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </cpu>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <memoryBacking supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <enum name='sourceType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>file</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>anonymous</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>memfd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </memoryBacking>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <devices>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <disk supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='diskDevice'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>disk</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>cdrom</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>floppy</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>lun</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='bus'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>fdc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>scsi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>sata</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-non-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </disk>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <graphics supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vnc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>egl-headless</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dbus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </graphics>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <video supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='modelType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vga</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>cirrus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>none</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>bochs</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>ramfb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </video>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <hostdev supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='mode'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>subsystem</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='startupPolicy'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>default</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>mandatory</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>requisite</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>optional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='subsysType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pci</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>scsi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='capsType'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='pciBackend'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </hostdev>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <rng supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-non-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>random</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>egd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>builtin</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </rng>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <filesystem supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='driverType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>path</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>handle</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtiofs</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </filesystem>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <tpm supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tpm-tis</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tpm-crb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>emulator</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>external</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendVersion'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>2.0</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </tpm>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <redirdev supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='bus'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </redirdev>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <channel supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pty</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>unix</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </channel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <crypto supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>qemu</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>builtin</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </crypto>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <interface supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>default</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>passt</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </interface>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <panic supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>isa</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>hyperv</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </panic>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <console supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>null</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pty</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dev</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>file</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pipe</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>stdio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>udp</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tcp</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>unix</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>qemu-vdagent</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dbus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </console>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </devices>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <gic supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <vmcoreinfo supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <genid supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <backingStoreInput supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <backup supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <async-teardown supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <ps2 supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <sev supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <sgx supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <hyperv supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='features'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>relaxed</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vapic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>spinlocks</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vpindex</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>runtime</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>synic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>stimer</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>reset</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vendor_id</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>frequencies</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>reenlightenment</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tlbflush</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>ipi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>avic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>emsr_bitmap</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>xmm_input</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <defaults>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <spinlocks>4095</spinlocks>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <stimer_direct>on</stimer_direct>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </defaults>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </hyperv>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <launchSecurity supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='sectype'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tdx</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </launchSecurity>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: </domainCapabilities>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.721 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  1 05:09:39 np0005540825 nova_compute[255113]: <domainCapabilities>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <domain>kvm</domain>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <arch>x86_64</arch>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <vcpu max='240'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <iothreads supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <os supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <enum name='firmware'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <loader supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>rom</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pflash</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='readonly'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>yes</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>no</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='secure'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>no</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </loader>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </os>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <cpu>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='host-passthrough' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='hostPassthroughMigratable'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>on</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>off</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='maximum' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='maximumMigratable'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>on</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>off</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='host-model' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <vendor>AMD</vendor>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='x2apic'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='hypervisor'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='stibp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='overflow-recov'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='succor'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='lbrv'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='tsc-scale'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='flushbyasid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='pause-filter'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='pfthreshold'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <feature policy='disable' name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <mode name='custom' supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Broadwell-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Cooperlake-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Denverton-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Dhyana-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Genoa'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='auto-ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='auto-ibrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Milan-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amd-psfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='stibp-always-on'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-Rome-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='EPYC-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='GraniteRapids-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-128'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-256'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx10-512'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='prefetchiti'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Haswell-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v6'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Icelake-Server-v7'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='IvyBridge-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='KnightsMill'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512er'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512pf'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='KnightsMill-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512er'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512pf'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G4-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tbm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Opteron_G5-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fma4'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tbm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xop'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SapphireRapids-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='amx-tile'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-bf16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-fp16'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bitalg'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrc'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fzrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='la57'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='taa-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xfd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SierraForest'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cmpccxadd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='SierraForest-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ifma'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cmpccxadd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fbsdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='fsrs'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ibrs-all'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mcdt-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pbrsb-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='psdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='serialize'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vaes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Client-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='hle'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='rtm'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Skylake-Server-v5'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512bw'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512cd'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512dq'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512f'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='avx512vl'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='invpcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pcid'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='pku'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='mpx'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v2'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v3'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='core-capability'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='split-lock-detect'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='Snowridge-v4'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='cldemote'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='erms'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='gfni'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdir64b'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='movdiri'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='xsaves'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='athlon'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='athlon-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='core2duo'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='core2duo-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='coreduo'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='coreduo-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='n270'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='n270-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='ss'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='phenom'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <blockers model='phenom-v1'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnow'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <feature name='3dnowext'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </blockers>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </mode>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </cpu>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <memoryBacking supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <enum name='sourceType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>file</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>anonymous</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <value>memfd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </memoryBacking>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <devices>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <disk supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='diskDevice'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>disk</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>cdrom</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>floppy</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>lun</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='bus'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>ide</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>fdc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>scsi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>sata</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-non-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </disk>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <graphics supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vnc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>egl-headless</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dbus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </graphics>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <video supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='modelType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vga</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>cirrus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>none</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>bochs</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>ramfb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </video>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <hostdev supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='mode'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>subsystem</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='startupPolicy'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>default</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>mandatory</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>requisite</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>optional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='subsysType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pci</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>scsi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='capsType'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='pciBackend'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </hostdev>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <rng supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtio-non-transitional</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>random</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>egd</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>builtin</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </rng>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <filesystem supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='driverType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>path</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>handle</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>virtiofs</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </filesystem>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <tpm supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tpm-tis</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tpm-crb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>emulator</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>external</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendVersion'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>2.0</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </tpm>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <redirdev supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='bus'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>usb</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </redirdev>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <channel supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pty</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>unix</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </channel>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <crypto supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>qemu</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendModel'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>builtin</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </crypto>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <interface supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='backendType'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>default</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>passt</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </interface>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <panic supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='model'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>isa</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>hyperv</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </panic>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <console supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='type'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>null</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vc</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pty</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dev</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>file</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>pipe</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>stdio</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>udp</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tcp</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>unix</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>qemu-vdagent</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>dbus</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </console>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </devices>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  <features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <gic supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <vmcoreinfo supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <genid supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <backingStoreInput supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <backup supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <async-teardown supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <ps2 supported='yes'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <sev supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <sgx supported='no'/>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <hyperv supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='features'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>relaxed</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vapic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>spinlocks</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vpindex</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>runtime</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>synic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>stimer</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>reset</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>vendor_id</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>frequencies</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>reenlightenment</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tlbflush</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>ipi</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>avic</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>emsr_bitmap</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>xmm_input</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <defaults>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <spinlocks>4095</spinlocks>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <stimer_direct>on</stimer_direct>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </defaults>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </hyperv>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    <launchSecurity supported='yes'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      <enum name='sectype'>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:        <value>tdx</value>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:      </enum>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:    </launchSecurity>
Dec  1 05:09:39 np0005540825 nova_compute[255113]:  </features>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: </domainCapabilities>
Dec  1 05:09:39 np0005540825 nova_compute[255113]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
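The <domainCapabilities> document ending above is what nova's libvirt driver parses to decide which CPU models it can offer on this host (note the usable='yes'/'no' attributes and the per-model <blockers> lists). As an illustrative aside, not part of the log: a minimal sketch of fetching and filtering the same data with the libvirt Python bindings, assuming libvirt-python is installed and virtqemud answers on qemu:///system.

    # Sketch only: list the CPU models libvirt reports as usable for KVM.
    # The URI and arch below are assumptions matching this host's log.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")
    caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    root = ET.fromstring(caps_xml)
    for model in root.iter("model"):
        if model.get("usable") == "yes":
            print(model.text)   # e.g. Westmere, Westmere-IBRS, ...
    conn.close()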
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.822 255117 DEBUG nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.823 255117 INFO nova.virt.libvirt.host [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Secure Boot support detected
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.826 255117 INFO nova.virt.libvirt.driver [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.838 255117 DEBUG nova.virt.libvirt.driver [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.861 255117 INFO nova.virt.node [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Determined node identity 5efe20fe-1981-4bd9-8786-d9fddc89a5ae from /var/lib/nova/compute_id
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.878 255117 WARNING nova.compute.manager [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Compute nodes ['5efe20fe-1981-4bd9-8786-d9fddc89a5ae'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.911 255117 INFO nova.compute.manager [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.938 255117 WARNING nova.compute.manager [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.939 255117 DEBUG oslo_concurrency.lockutils [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.939 255117 DEBUG oslo_concurrency.lockutils [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.939 255117 DEBUG oslo_concurrency.lockutils [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.939 255117 DEBUG nova.compute.resource_tracker [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:09:39 np0005540825 nova_compute[255113]: 2025-12-01 10:09:39.939 255117 DEBUG oslo_concurrency.processutils [None req-8d76a14b-33fa-4859-ae95-26b12299b48d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
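The resource-tracker audit above shells out to ceph df for cluster usage. As an illustrative aside, not part of the log: a hedged sketch of replaying the same command and reading the totals from its JSON output (exact top-level field names can differ between Ceph releases).

    # Sketch only: replay the ceph df call nova logs above and read totals.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    # "total_bytes"/"total_avail_bytes" hold raw cluster capacity figures.
    print(stats["total_bytes"], stats["total_avail_bytes"])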
Dec  1 05:09:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:09:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:09:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:09:40 np0005540825 systemd-coredump[255619]: Process 243081 (ganesha.nfsd) of user 0 dumped core.
Dec  1 05:09:40 np0005540825 systemd-coredump[255619]: Stack trace of thread 54:
Dec  1 05:09:40 np0005540825 systemd-coredump[255619]: #0  0x00007f15889ae32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Dec  1 05:09:40 np0005540825 systemd-coredump[255619]: ELF object binary architecture: AMD x86-64
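The ganesha.nfsd core dump recorded here (matching the status=139, i.e. SIGSEGV, service failure a few lines below) is retained by systemd-coredump. As an illustrative aside, not part of the log: a hedged sketch of pulling the stored dump metadata for triage, assuming coredumpctl is available on the host.

    # Sketch only: fetch metadata and stack trace for the dumped PID.
    import subprocess

    info = subprocess.run(
        ["coredumpctl", "info", "243081"],   # PID from the log line above
        capture_output=True, text=True).stdout
    print(info)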
Dec  1 05:09:40 np0005540825 python3.9[255978]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 05:09:40 np0005540825 systemd[1]: Stopping nova_compute container...
Dec  1 05:09:40 np0005540825 systemd[1]: systemd-coredump@6-255611-0.service: Deactivated successfully.
Dec  1 05:09:40 np0005540825 systemd[1]: systemd-coredump@6-255611-0.service: Consumed 1.250s CPU time.
Dec  1 05:09:40 np0005540825 podman[256007]: 2025-12-01 10:09:40.353730481 +0000 UTC m=+0.042794396 container died 622a92b2cdc67a6e8583860fa92bdde8dd0c70c580c37547ced38640adea5147 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 05:09:40 np0005540825 nova_compute[255113]: 2025-12-01 10:09:40.352 255117 DEBUG oslo_concurrency.lockutils [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:09:40 np0005540825 nova_compute[255113]: 2025-12-01 10:09:40.353 255117 DEBUG oslo_concurrency.lockutils [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:09:40 np0005540825 nova_compute[255113]: 2025-12-01 10:09:40.353 255117 DEBUG oslo_concurrency.lockutils [None req-8e6ab277-5ce1-4de8-82c9-87f9a3bf8476 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 05:09:40 np0005540825 systemd[1]: var-lib-containers-storage-overlay-6ee1c7a0c1d9d147610305627d26ee6c3e77e51dab7a16d668c5e23ae4b23f87-merged.mount: Deactivated successfully.
Dec  1 05:09:40 np0005540825 podman[256007]: 2025-12-01 10:09:40.424825709 +0000 UTC m=+0.113889594 container remove 622a92b2cdc67a6e8583860fa92bdde8dd0c70c580c37547ced38640adea5147 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  1 05:09:40 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Main process exited, code=exited, status=139/n/a
Dec  1 05:09:40 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Failed with result 'exit-code'.
Dec  1 05:09:40 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.720s CPU time.
Dec  1 05:09:40 np0005540825 podman[256061]: 2025-12-01 10:09:40.701148376 +0000 UTC m=+0.069388513 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
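The health_status=healthy record above comes from podman running the container's configured check (the 'healthcheck' entry in config_data, here /openstack/healthcheck). As an illustrative aside, not part of the log: a hedged sketch of driving the same check by hand, relying only on the exit code of podman healthcheck run (0 means healthy).

    # Sketch only: re-run the ovn_metadata_agent healthcheck manually.
    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else "unhealthy (exit %d)" % rc)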
Dec  1 05:09:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:40.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:41 np0005540825 virtqemud[255660]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  1 05:09:41 np0005540825 systemd[1]: libpod-09bb02350eb17b03ab54ddb939d2b1a808bc64b77f09d5116bd141e5ccbf5742.scope: Deactivated successfully.
Dec  1 05:09:41 np0005540825 virtqemud[255660]: hostname: compute-0
Dec  1 05:09:41 np0005540825 virtqemud[255660]: End of file while reading data: Input/output error
Dec  1 05:09:41 np0005540825 podman[256003]: 2025-12-01 10:09:41.088584581 +0000 UTC m=+0.784763358 container died 09bb02350eb17b03ab54ddb939d2b1a808bc64b77f09d5116bd141e5ccbf5742 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:09:41 np0005540825 systemd[1]: libpod-09bb02350eb17b03ab54ddb939d2b1a808bc64b77f09d5116bd141e5ccbf5742.scope: Consumed 3.868s CPU time.
Dec  1 05:09:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09bb02350eb17b03ab54ddb939d2b1a808bc64b77f09d5116bd141e5ccbf5742-userdata-shm.mount: Deactivated successfully.
Dec  1 05:09:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay-87da5fbaf8eca9472d533ac968b5fd1e135728ba235fb2a71f3271f4b255806c-merged.mount: Deactivated successfully.
Dec  1 05:09:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:09:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:09:41] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:09:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:09:41] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:09:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:41.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:42 np0005540825 podman[256003]: 2025-12-01 10:09:42.506042362 +0000 UTC m=+2.202221179 container cleanup 09bb02350eb17b03ab54ddb939d2b1a808bc64b77f09d5116bd141e5ccbf5742 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  1 05:09:42 np0005540825 podman[256003]: nova_compute
Dec  1 05:09:42 np0005540825 podman[256123]: nova_compute
Dec  1 05:09:42 np0005540825 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  1 05:09:42 np0005540825 systemd[1]: Stopped nova_compute container.
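[annotation] The died/cleanup/Stopped/Starting sequence shows that nova_compute is restarted not by podman itself but by the systemd unit wrapping it (edpm_nova_compute.service): podman only reports the container's death, and the unit's restart logic brings it back. Unit state and restart count can be queried directly; a small sketch, with the unit name taken from the log:

    import subprocess

    def unit_state(unit: str) -> dict:
        # `systemctl show` prints KEY=VALUE pairs for the requested properties.
        out = subprocess.run(
            ["systemctl", "show", unit,
             "--property=ActiveState,SubState,NRestarts"],
            check=True, capture_output=True, text=True).stdout
        return dict(l.split("=", 1) for l in out.splitlines() if l)

    print(unit_state("edpm_nova_compute.service"))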
Dec  1 05:09:42 np0005540825 systemd[1]: Starting nova_compute container...
Dec  1 05:09:42 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:09:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87da5fbaf8eca9472d533ac968b5fd1e135728ba235fb2a71f3271f4b255806c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87da5fbaf8eca9472d533ac968b5fd1e135728ba235fb2a71f3271f4b255806c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87da5fbaf8eca9472d533ac968b5fd1e135728ba235fb2a71f3271f4b255806c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87da5fbaf8eca9472d533ac968b5fd1e135728ba235fb2a71f3271f4b255806c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87da5fbaf8eca9472d533ac968b5fd1e135728ba235fb2a71f3271f4b255806c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
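[annotation] These kernel warnings mean the xfs filesystems bind-mounted into the container were created without the bigtime feature, so their inode timestamps top out at the raw epoch value the kernel prints. Decoding 0x7fffffff shows where the 2038 limit comes from:

    from datetime import datetime, timezone

    limit = 0x7fffffff  # 2147483647, a signed 32-bit second counter
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00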
Dec  1 05:09:42 np0005540825 podman[256136]: 2025-12-01 10:09:42.812578073 +0000 UTC m=+0.146304709 container init 09bb02350eb17b03ab54ddb939d2b1a808bc64b77f09d5116bd141e5ccbf5742 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:09:42 np0005540825 podman[256136]: 2025-12-01 10:09:42.826781166 +0000 UTC m=+0.160507742 container start 09bb02350eb17b03ab54ddb939d2b1a808bc64b77f09d5116bd141e5ccbf5742 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec  1 05:09:42 np0005540825 podman[256136]: nova_compute
Dec  1 05:09:42 np0005540825 nova_compute[256151]: + sudo -E kolla_set_configs
Dec  1 05:09:42 np0005540825 systemd[1]: Started nova_compute container.
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Validating config file
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying service configuration files
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Deleting /etc/ceph
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Creating directory /etc/ceph
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /etc/ceph
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Writing out command to execute
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 05:09:42 np0005540825 nova_compute[256151]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 05:09:42 np0005540825 nova_compute[256151]: ++ cat /run_command
Dec  1 05:09:42 np0005540825 nova_compute[256151]: + CMD=nova-compute
Dec  1 05:09:42 np0005540825 nova_compute[256151]: + ARGS=
Dec  1 05:09:42 np0005540825 nova_compute[256151]: + sudo kolla_copy_cacerts
Dec  1 05:09:42 np0005540825 nova_compute[256151]: + [[ ! -n '' ]]
Dec  1 05:09:42 np0005540825 nova_compute[256151]: + . kolla_extend_start
Dec  1 05:09:42 np0005540825 nova_compute[256151]: + echo 'Running command: '\''nova-compute'\'''
Dec  1 05:09:42 np0005540825 nova_compute[256151]: Running command: 'nova-compute'
Dec  1 05:09:42 np0005540825 nova_compute[256151]: + umask 0022
Dec  1 05:09:42 np0005540825 nova_compute[256151]: + exec nova-compute
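[annotation] The INFO:__main__ lines above come from kolla's set_configs helper: it loads /var/lib/kolla/config_files/config.json, and because the strategy is COPY_ALWAYS it deletes each destination, copies the source back in, and resets permissions before kolla_start finally execs the command written to /run_command. A much-reduced sketch of that loop; the real helper also handles owners, globs, optional sources, and merging, none of which is shown, and the source/dest/perm schema is assumed from kolla's config.json format:

    import json
    import shutil
    from pathlib import Path

    def copy_always(cfg_path="/var/lib/kolla/config_files/config.json"):
        cfg = json.loads(Path(cfg_path).read_text())
        for item in cfg.get("config_files", []):
            src, dest = Path(item["source"]), Path(item["dest"])
            if dest.exists():                      # COPY_ALWAYS: never reuse
                print(f"Deleting {dest}")
                shutil.rmtree(dest) if dest.is_dir() else dest.unlink()
            print(f"Copying {src} to {dest}")
            shutil.copytree(src, dest) if src.is_dir() else shutil.copy2(src, dest)
            print(f"Setting permission for {dest}")
            dest.chmod(int(item.get("perm", "0600"), 8))
        # The 'command' key becomes /run_command, which kolla_start cats
        # and execs ('exec nova-compute' above).
        Path("/run_command").write_text(cfg["command"])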
Dec  1 05:09:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:42.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:09:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:43.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:43.581Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:09:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:43.581Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:09:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:09:43.582Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
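[annotation] Both webhook receivers fail at the TCP layer ("dial tcp ... i/o timeout"), which points at nothing listening on 8443 on those hosts (or a filtered path) rather than a slow HTTP backend. A raw connect with a similar deadline separates the two cases; hosts and port are taken from the log lines above:

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)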
Dec  1 05:09:43 np0005540825 python3.9[256315]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
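[annotation] Almost every key in the podman_container parameter dump above is None, i.e. unset; only name, state, executable, and a few booleans were actually passed by the play. A throwaway filter makes such dumps readable; the naive split breaks on values that contain spaces, so this is illustrative only:

    raw = ("name=nova_compute_init state=started executable=podman "
           "detach=True debug=False force_restart=False force_delete=True "
           "image=None cap_add=None")  # truncated sample of the dump above
    args = dict(tok.split("=", 1) for tok in raw.split() if "=" in tok)
    print({k: v for k, v in args.items() if v != "None"})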
Dec  1 05:09:44 np0005540825 systemd[1]: Started libpod-conmon-ca896e0bfad55626a9f81954231a858c2995f203dd2970ce570c6b9c7bcc1d26.scope.
Dec  1 05:09:44 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 05:09:44 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:09:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5396458638fc2b8672bcaaaca48c319972b9069d3612c0f516f8059b411a9dc6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5396458638fc2b8672bcaaaca48c319972b9069d3612c0f516f8059b411a9dc6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5396458638fc2b8672bcaaaca48c319972b9069d3612c0f516f8059b411a9dc6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  1 05:09:44 np0005540825 podman[256342]: 2025-12-01 10:09:44.170426495 +0000 UTC m=+0.158511838 container init ca896e0bfad55626a9f81954231a858c2995f203dd2970ce570c6b9c7bcc1d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:09:44 np0005540825 podman[256342]: 2025-12-01 10:09:44.181441723 +0000 UTC m=+0.169527066 container start ca896e0bfad55626a9f81954231a858c2995f203dd2970ce570c6b9c7bcc1d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 05:09:44 np0005540825 python3.9[256315]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Applying nova statedir ownership
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  1 05:09:44 np0005540825 nova_compute_init[256365]: INFO:nova_statedir:Nova statedir ownership complete
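[annotation] The nova_compute_init lines are nova_statedir_ownership.py walking /var/lib/nova and re-owning it to the in-container nova uid/gid (42436), skipping whatever NOVA_STATEDIR_OWNERSHIP_SKIP names (here /var/lib/nova/compute_id). A sketch of just the chown walk; the real script also restores SELinux contexts, which is omitted:

    import os
    from pathlib import Path

    TARGET_UID = TARGET_GID = 42436              # 'nova' inside the container
    SKIP = {"/var/lib/nova/compute_id"}          # NOVA_STATEDIR_OWNERSHIP_SKIP

    def apply_ownership(statedir="/var/lib/nova"):
        for path in [Path(statedir), *Path(statedir).rglob("*")]:
            if str(path) in SKIP:
                continue
            st = path.stat()
            print(f"Checking uid: {st.st_uid} gid: {st.st_gid} path: {path}")
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                print(f"Changing ownership of {path} to "
                      f"{TARGET_UID}:{TARGET_GID}")
                os.chown(path, TARGET_UID, TARGET_GID)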
Dec  1 05:09:44 np0005540825 systemd[1]: libpod-ca896e0bfad55626a9f81954231a858c2995f203dd2970ce570c6b9c7bcc1d26.scope: Deactivated successfully.
Dec  1 05:09:44 np0005540825 podman[256366]: 2025-12-01 10:09:44.270432484 +0000 UTC m=+0.047596175 container died ca896e0bfad55626a9f81954231a858c2995f203dd2970ce570c6b9c7bcc1d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init)
Dec  1 05:09:44 np0005540825 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca896e0bfad55626a9f81954231a858c2995f203dd2970ce570c6b9c7bcc1d26-userdata-shm.mount: Deactivated successfully.
Dec  1 05:09:44 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5396458638fc2b8672bcaaaca48c319972b9069d3612c0f516f8059b411a9dc6-merged.mount: Deactivated successfully.
Dec  1 05:09:44 np0005540825 podman[256376]: 2025-12-01 10:09:44.360491284 +0000 UTC m=+0.083795392 container cleanup ca896e0bfad55626a9f81954231a858c2995f203dd2970ce570c6b9c7bcc1d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:09:44 np0005540825 systemd[1]: libpod-conmon-ca896e0bfad55626a9f81954231a858c2995f203dd2970ce570c6b9c7bcc1d26.scope: Deactivated successfully.
Dec  1 05:09:44 np0005540825 nova_compute[256151]: 2025-12-01 10:09:44.868 256155 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 05:09:44 np0005540825 nova_compute[256151]: 2025-12-01 10:09:44.869 256155 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 05:09:44 np0005540825 nova_compute[256151]: 2025-12-01 10:09:44.869 256155 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 05:09:44 np0005540825 nova_compute[256151]: 2025-12-01 10:09:44.869 256155 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
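[annotation] os_vif finds its plug/unplug drivers through setuptools entry points in the 'os_vif' namespace, which is what the three "Loaded VIF plugin class" lines reflect. Roughly equivalent discovery with stevedore (assumes os-vif and its plugins are installed in the environment):

    from stevedore import extension

    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    print(sorted(mgr.names()))  # expect ['linux_bridge', 'noop', 'ovs']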
Dec  1 05:09:44 np0005540825 systemd[1]: session-54.scope: Deactivated successfully.
Dec  1 05:09:44 np0005540825 systemd[1]: session-54.scope: Consumed 2min 42.769s CPU time.
Dec  1 05:09:44 np0005540825 systemd-logind[789]: Session 54 logged out. Waiting for processes to exit.
Dec  1 05:09:44 np0005540825 systemd-logind[789]: Removed session 54.
Dec  1 05:09:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:09:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:09:44.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:09:44 np0005540825 nova_compute[256151]: 2025-12-01 10:09:44.995 256155 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:09:45 np0005540825 nova_compute[256151]: 2025-12-01 10:09:45.017 256155 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:09:45 np0005540825 nova_compute[256151]: 2025-12-01 10:09:45.018 256155 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
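[annotation] The grep against /sbin/iscsiadm (which the earlier kolla step replaced with the run-on-host shim) is a capability probe, apparently from os-brick's iSCSI connector: it looks for node.session.scan support to decide whether manual-scan mode is available, and grep's exit status 1 just means "string absent", so it is tolerated rather than retried. The same call through oslo.concurrency, with status 1 whitelisted:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
        check_exit_code=[0, 1])      # 1 = pattern absent, not an error
    print("manual scan supported" if out else "manual scan not supported")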
Dec  1 05:09:45 np0005540825 podman[256430]: 2025-12-01 10:09:45.033825295 +0000 UTC m=+0.058937252 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 05:09:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:09:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:09:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:09:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:09:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:09:45.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.124 256155 INFO nova.virt.driver [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.255 256155 INFO nova.compute.provider_config [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.266 256155 DEBUG oslo_concurrency.lockutils [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.267 256155 DEBUG oslo_concurrency.lockutils [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.267 256155 DEBUG oslo_concurrency.lockutils [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.267 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.267 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.267 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.268 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.268 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.268 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.268 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.268 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.268 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.268 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.269 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.269 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.269 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.269 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.269 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.269 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.269 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.269 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.270 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.270 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.270 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.270 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.270 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.270 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.270 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.271 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.271 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.271 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.271 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.271 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.271 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.271 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.272 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.272 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.272 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.272 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.272 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.272 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.272 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.273 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.273 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.273 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.273 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.273 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.273 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.273 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.274 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.274 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.274 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.274 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.274 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.274 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.274 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.275 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.275 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.275 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.275 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.275 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.275 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.275 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.276 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.276 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.276 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.276 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.276 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.276 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.276 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.277 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.277 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.277 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.277 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.277 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.277 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.277 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.277 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.278 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.278 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.278 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.278 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.278 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.278 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.278 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.279 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.279 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.279 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.279 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.279 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.279 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.279 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.279 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.280 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.280 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.280 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.280 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/100946 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
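The haproxy WARNING above is the one non-config event in this stretch: the Layer4 (TCP connect) health check against backend/nfs.cephfs.2 was refused, so haproxy marked that server DOWN with 2 active servers remaining. A minimal sketch of the same style of probe, assuming a hypothetical backend address and the standard NFS port:

    # Layer4 (TCP connect) probe in the spirit of haproxy's check; the
    # address below is a hypothetical stand-in for the nfs.cephfs.2 backend.
    import socket

    def tcp_check(host: str, port: int, timeout: float = 1.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True   # connect succeeded: backend is UP at layer 4
        except OSError as exc:   # e.g. ConnectionRefusedError, as logged above
            print(f"check failed: {exc}")
            return False

    tcp_check("192.168.122.101", 2049)  # hypothetical host; 2049 = NFS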
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.280 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.280 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.281 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.281 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.281 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.281 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.281 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.281 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.282 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.282 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.282 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.282 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.282 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.282 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.282 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.282 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.283 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.283 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.283 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.283 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.283 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.283 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.283 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.284 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.284 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.284 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.284 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.284 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.284 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.284 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.284 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.285 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.285 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.285 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.285 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.285 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.285 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.285 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.285 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.286 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.286 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.286 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.286 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.286 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.286 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.286 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.287 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.287 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.287 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.287 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.287 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.287 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.287 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.287 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
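Every nova_compute DEBUG line above comes from oslo.config's ConfigOpts.log_opt_values() (the cfg.py:2602/2609 frames at the end of each line), which dumps all registered options once at service startup when debug logging is on. A minimal sketch of that mechanism, using a made-up option name:

    # How the "name = value" dump above is produced; log_opt_values() is the
    # real oslo.config API the log lines reference, the option is made up.
    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('report_interval', default=10)])

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([])                                 # parse an empty argv
    CONF.log_opt_values(LOG, logging.DEBUG)  # one DEBUG line per option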
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.288 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.288 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
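oslo_concurrency.lock_path = /var/lib/nova/tmp is where external (inter-process) file locks land. A sketch of how service code typically uses it, with a made-up lock name:

    # External file locks under the lock_path shown above; set_defaults()
    # and synchronized() are real oslo.concurrency APIs, 'demo-lock' is made up.
    from oslo_concurrency import lockutils

    lockutils.set_defaults('/var/lib/nova/tmp')  # matches oslo_concurrency.lock_path

    @lockutils.synchronized('demo-lock', external=True)
    def critical_section():
        # Only one process holding the lock file runs this at a time.
        pass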
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.288 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.288 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.288 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.288 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.288 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.289 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.289 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.289 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.289 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.289 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.289 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.289 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.290 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.290 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.290 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.290 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.290 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.290 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.290 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.291 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.291 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.291 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.291 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.291 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.291 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.291 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.291 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.292 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
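The dotted names above (api.*, cache.*, and so on below) are oslo.config option groups; code reads them as CONF.<group>.<option>. A minimal sketch registering and reading two of the [api] options seen above:

    # Grouped options as oslo.config exposes them; names and defaults match
    # the logged api.* values, the surrounding wiring is illustrative.
    from oslo_config import cfg

    CONF = cfg.CONF
    api_group = cfg.OptGroup('api')
    CONF.register_group(api_group)
    CONF.register_opts([
        cfg.IntOpt('max_limit', default=1000),
        cfg.StrOpt('auth_strategy', default='keystone'),
    ], group=api_group)

    CONF([])
    print(CONF.api.max_limit)      # 1000, as logged
    print(CONF.api.auth_strategy)  # keystone, as logged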
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.292 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.292 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.292 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.292 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.292 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.292 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.293 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.293 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.293 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.293 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.293 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.293 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.293 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.294 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.294 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.294 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.294 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.294 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.294 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.294 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.294 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.295 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.295 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.295 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.295 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.295 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.295 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.295 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.296 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.296 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.296 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.296 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
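The cache.* block above (backend oslo_cache.dict, enabled True, expiration_time 600) is consumed by oslo.cache. A sketch of the equivalent wiring, assuming a standalone script rather than nova itself:

    # Standalone oslo.cache region matching the logged cache.* values;
    # configure(), create_region() and configure_cache_region() are real
    # oslo.cache entry points.
    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.CONF
    cache.configure(CONF)            # registers the [cache] options seen above
    CONF([], project='demo')         # 'demo' is a placeholder project name
    CONF.set_override('backend', 'oslo_cache.dict', group='cache')
    CONF.set_override('enabled', True, group='cache')

    region = cache.create_region()
    cache.configure_cache_region(CONF, region)
    region.set('key', 'value')
    print(region.get('key'))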
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.296 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.296 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.296 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.297 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.297 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.297 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.297 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.297 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.297 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.297 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.297 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.298 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.298 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.298 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.298 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
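cinder.catalog_info = volumev3:cinderv3:internalURL above is a service_type:service_name:interface triple that nova uses to pick the block-storage endpoint out of the Keystone catalog. A sketch of the split; parse_catalog_info is a hypothetical helper, not nova's actual code:

    # Hypothetical parser for the cinder.catalog_info triple logged above.
    def parse_catalog_info(value: str) -> dict:
        service_type, service_name, interface = value.split(':')
        return {'service_type': service_type,   # volumev3
                'service_name': service_name,   # cinderv3
                'interface': interface}         # internalURL

    print(parse_catalog_info('volumev3:cinderv3:internalURL'))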
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.298 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.298 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.298 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.299 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.299 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.299 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.299 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.299 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.299 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.299 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.299 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.300 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.300 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.300 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.300 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.300 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.300 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.300 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.301 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.301 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.301 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.301 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.301 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.301 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.302 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.302 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.302 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.302 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.302 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.302 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.302 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.302 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.303 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.303 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.303 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.303 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
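The cyborg.* block above is a standard keystoneauth adapter/session option set (cafile, timeout, service_type = accelerator, valid_interfaces). A sketch of the client wiring those options feed, with hypothetical credentials and auth_url:

    # keystoneauth plumbing behind the cyborg.* options; every credential
    # below is a placeholder, only service_type/interface mirror the log.
    from keystoneauth1 import session
    from keystoneauth1.adapter import Adapter
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone:5000/v3',   # hypothetical
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)          # cyborg.cafile -> verify=...
    accel = Adapter(session=sess,
                    service_type='accelerator',         # cyborg.service_type
                    interface=['internal', 'public'])   # cyborg.valid_interfaces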
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.303 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.303 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.303 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.304 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.304 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.304 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.304 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.304 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.304 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.304 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.305 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.305 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.305 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.305 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.305 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.305 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.305 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.305 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.306 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.306 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.306 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.306 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.306 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.306 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.306 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.307 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.307 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.307 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.307 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.307 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.307 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.307 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.308 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.308 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.308 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.308 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.308 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.308 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.308 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.308 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.309 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.309 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.309 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.309 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.309 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.310 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.310 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.310 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.310 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.310 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.310 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.310 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.310 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.311 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.311 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.311 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.311 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.311 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.311 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.311 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.312 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.312 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.312 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.312 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.312 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.312 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.312 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.312 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.313 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.313 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.313 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.313 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.313 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.313 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.313 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.314 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.314 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.314 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.314 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.314 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.314 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.314 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.315 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.315 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.315 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.315 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.315 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.315 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.315 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.316 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.316 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.316 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.316 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.316 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.316 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.317 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.317 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.317 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.317 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.317 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.317 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.317 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.318 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.318 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.318 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.318 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.318 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.318 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.319 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.319 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.319 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.319 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.319 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.319 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.319 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.319 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.320 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.320 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.320 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.320 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.320 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.320 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.320 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.320 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.321 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.321 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.321 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.321 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.321 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.321 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.321 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.322 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.322 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.322 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.322 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.322 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.322 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.322 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.323 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.323 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.323 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.323 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.323 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.323 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.323 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.324 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.324 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.324 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.324 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.324 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.324 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.324 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.325 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.325 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.325 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.325 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.325 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.325 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.325 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.326 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.326 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.326 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.326 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.326 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.326 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.326 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.326 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.327 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.327 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.327 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.327 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.327 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.327 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.328 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.328 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.328 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.328 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.328 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.328 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.328 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.328 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.329 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.329 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.329 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.329 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.329 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.329 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.329 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.330 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.330 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.330 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.330 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.330 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.330 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.331 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.331 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.331 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.331 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.331 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.331 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.331 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.332 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.332 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.332 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.332 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.332 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.332 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.332 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.333 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.333 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.333 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.333 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.333 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.333 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.333 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.334 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.334 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.334 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.334 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.334 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.334 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.334 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.335 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.335 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.335 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.335 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.335 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.335 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.336 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.336 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.336 256155 WARNING oslo_config.cfg [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  1 05:09:46 np0005540825 nova_compute[256151]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  1 05:09:46 np0005540825 nova_compute[256151]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  1 05:09:46 np0005540825 nova_compute[256151]: and ``live_migration_inbound_addr`` respectively.
Dec  1 05:09:46 np0005540825 nova_compute[256151]: ).  Its value may be silently ignored in the future.#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.336 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.336 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.336 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.337 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.337 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.337 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.337 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.337 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.337 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.337 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.338 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.338 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.338 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.338 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.338 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.338 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.338 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.339 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.339 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.rbd_secret_uuid        = 365f19c2-81e5-5edd-b6b4-280555214d3a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.339 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.339 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.339 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.339 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.339 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.340 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.340 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.340 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.340 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.340 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.340 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.340 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.341 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.341 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.341 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.341 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.341 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.342 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.342 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.342 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.342 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.342 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.342 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.342 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.343 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.343 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.343 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.343 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.343 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.343 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.343 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.344 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.344 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
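Editor's note: the [libvirt] options above describe a Ceph-backed KVM hypervisor — ephemeral disks in the "vms" RBD pool accessed as the "openstack" cephx user, q35 machine type with host-model CPUs, and live migration over QEMU-native TLS with auto-converge and post-copy permitted. Below is a minimal nova.conf sketch reconstructed from the logged values; the comments and the inbound-address placeholder are mine, and the last two lines show the replacement the WARNING above asks for (the scheme supplies the "qemu+tls" part of the deprecated live_migration_uri, the inbound address fills its "%s" target).

    [libvirt]
    virt_type = kvm
    cpu_mode = host-model
    hw_machine_type = x86_64=q35
    # Ephemeral storage on Ceph RBD
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = 365f19c2-81e5-5edd-b6b4-280555214d3a
    # TLS-native live migration with auto-converge/post-copy permitted
    live_migration_with_native_tls = True
    live_migration_permit_auto_converge = True
    live_migration_permit_post_copy = True
    volume_use_multipath = True
    swtpm_enabled = True
    # Replaces the deprecated live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    live_migration_inbound_addr = <destination hostname or IP>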
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.344 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.344 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.344 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.344 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.344 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.345 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.345 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.345 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.345 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.345 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.345 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.345 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.346 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.346 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.346 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.346 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.346 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.346 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.346 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.347 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.347 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.347 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.347 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.347 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.347 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.347 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.347 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.348 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
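Editor's note: the [neutron] block pairs password-based Keystone auth in regionOne with the OVS integration bridge br-int, and this compute node proxies instance metadata to Neutron (the shared secret is masked as **** in the log). A sketch of the implied nova.conf fragment, reconstructed from the logged values:

    [neutron]
    auth_type = password
    region_name = regionOne
    service_type = network
    valid_interfaces = internal
    default_floating_pool = nova
    ovs_bridge = br-int
    # Metadata proxying; the secret must match neutron's metadata agent
    service_metadata_proxy = True
    metadata_proxy_shared_secret = ****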
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.348 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.348 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.348 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.348 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.348 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
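Editor's note: with notification_format = unversioned and notify_on_state_change unset, only legacy unversioned notifications are emitted and no extra state-change events are generated. Pinned explicitly, the equivalent nova.conf fragment would read:

    [notifications]
    notification_format = unversioned
    default_level = INFO
    versioned_notifications_topics = versioned_notifications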
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.349 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.349 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.349 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.349 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.349 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.349 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.350 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.350 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.350 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.350 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.350 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.350 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.350 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.351 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.351 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.351 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.351 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.351 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.351 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.351 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.352 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.352 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.352 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.352 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.352 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.352 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.352 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.353 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.353 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.353 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.353 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.353 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.353 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.354 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.354 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.354 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.354 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.354 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.354 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.355 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
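Editor's note: the [placement] block is a standard Keystone service-user setup — the nova user in the service project, both in the Default domain, authenticating against the internal Keystone endpoint (the password is masked in the log). Reconstructed fragment:

    [placement]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = nova
    password = ****              # masked in the log output
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    region_name = regionOne
    valid_interfaces = internal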
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.355 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.355 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.355 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.355 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.355 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.355 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.356 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.356 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.356 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.356 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.356 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.356 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.356 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
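Editor's note: the [quota] values match the upstream defaults (10 instances, 20 cores, 51200 MB of RAM per project, DbQuotaDriver with recheck enabled), so they need not appear in nova.conf at all; pinned explicitly they would read:

    [quota]
    driver = nova.quota.DbQuotaDriver
    instances = 10
    cores = 20
    ram = 51200          # MB
    key_pairs = 100
    server_groups = 10
    server_group_members = 10
    recheck_quota = True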
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.357 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.357 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.357 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.357 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.357 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.358 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.358 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.358 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.358 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.358 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.358 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.358 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.359 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.359 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.359 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.359 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.359 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.359 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.359 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.360 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.360 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.360 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.360 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.360 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.360 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.360 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.360 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.361 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.361 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.361 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.361 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.361 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.361 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.361 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.362 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.362 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.362 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
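Every line in this dump is produced by oslo.config itself: the "log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609" trailer names the ConfigOpts.log_opt_values() method and call site that nova-compute hits at startup to log one DEBUG record per registered option. A minimal sketch of that mechanism, assuming a standalone script and reusing two of the [filter_scheduler] option names above purely as examples (this is not nova's own registration code):

    # Sketch: reproduce the "group.option = value" DEBUG dump seen above.
    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    logging.basicConfig(level=logging.DEBUG)

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.FloatOpt('cpu_weight_multiplier', default=1.0),
            cfg.ListOpt('enabled_filters', default=['ComputeFilter']),
        ],
        group='filter_scheduler')

    CONF(args=[])  # parse the (empty) command line and config files
    # Emits one DEBUG line per option, e.g.
    # "filter_scheduler.cpu_weight_multiplier = 1.0"
    CONF.log_opt_values(LOG, logging.DEBUG)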
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.362 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.362 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.362 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.362 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.363 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.363 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.363 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.363 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.363 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.363 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.364 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.364 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.364 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.364 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.364 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.364 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.364 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.365 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.365 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.365 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.365 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.365 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.365 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.366 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.366 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.366 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.366 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.366 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.366 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.366 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.366 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.367 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.367 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.367 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.367 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.367 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.367 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.367 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.368 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.368 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.368 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.368 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.368 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.368 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.368 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.369 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.369 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.369 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.369 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.369 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.369 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.370 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.370 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.370 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.370 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.370 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.370 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.370 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.370 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.371 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.371 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.371 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.371 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.371 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.371 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.371 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.372 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.372 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.372 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.372 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.372 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
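vmware.host_password above prints as **** rather than its real value: oslo.config masks any option registered with secret=True when log_opt_values() runs (the same masking appears further down for oslo_messaging_notifications.transport_url and oslo_limit.password). A sketch of that behaviour, assuming a throwaway option and dummy value; nova registers the real option elsewhere:

    # Sketch: secret=True keeps credentials out of the DEBUG dump.
    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    logging.basicConfig(level=logging.DEBUG)

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.StrOpt('host_password', secret=True, default='hunter2')],
        group='vmware')
    CONF(args=[])

    # Logs "vmware.host_password = ****", never the actual string.
    CONF.log_opt_values(LOG, logging.DEBUG)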
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.372 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.373 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.373 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.373 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.373 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.373 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.374 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.374 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.374 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.374 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.374 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.374 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.374 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.375 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.375 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.375 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.375 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.375 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.376 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.376 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.376 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.376 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.376 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.376 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.376 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.377 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.377 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.377 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.377 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.377 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.377 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
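The values logged in each group are the merged result of built-in defaults plus whatever nova.conf (and any --config-file arguments) override; the dump itself does not distinguish the two sources. A sketch of that precedence, assuming a temporary file standing in for nova.conf, one option name borrowed from the [workarounds] group above, and a demo default that is not necessarily nova's own:

    # Sketch: a value from a config file overrides the registered default.
    import tempfile

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.BoolOpt('enable_qemu_monitor_announce_self', default=False)],
        group='workarounds')

    with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
        f.write('[workarounds]\n'
                'enable_qemu_monitor_announce_self = True\n')

    CONF(args=[], default_config_files=[f.name])
    # The file value wins over the default registered above:
    print(CONF.workarounds.enable_qemu_monitor_announce_self)  # True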
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.377 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.378 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.378 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.378 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.378 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.378 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.378 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.378 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.379 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.379 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.379 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.379 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.379 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.379 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.379 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.380 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.380 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.380 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.380 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.380 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.381 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.381 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.381 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.381 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.381 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
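oslo_policy.enforce_new_defaults and oslo_policy.enforce_scope both being True means this service evaluates the newer secure-RBAC policy defaults with token-scope checking, with operator overrides read from policy.yaml and the policy.d directory named above. A sketch of an enforcer consuming these options, assuming oslo.policy is installed and using a made-up rule name:

    # Sketch: an Enforcer reads the [oslo_policy] options logged above
    # (policy_file, policy_dirs, enforce_scope, enforce_new_defaults, ...).
    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.ConfigOpts()
    CONF(args=[])

    enforcer = policy.Enforcer(CONF)
    enforcer.register_default(policy.RuleDefault('demo:rule', 'role:admin'))

    creds = {'roles': ['admin'], 'project_id': 'p1'}
    print(enforcer.enforce('demo:rule', {}, creds))  # True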
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.381 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.381 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.382 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.382 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.382 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.382 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.382 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.382 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.382 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.383 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.383 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.383 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.383 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.383 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.383 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.384 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.384 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.384 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.384 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.384 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.384 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.384 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.385 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.385 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.385 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.385 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.385 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.385 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.385 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.386 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.386 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.386 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.386 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.386 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.386 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
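With oslo_messaging_rabbit.rabbit_quorum_queue = True and amqp_durable_queues = True, RPC queues are declared as durable RabbitMQ quorum queues; the heartbeat, retry, and pool settings above are consumed when oslo.messaging builds its transport. A sketch, assuming oslo.messaging is installed and using a placeholder broker URL (the deployment's real transport_url is not shown in this group):

    # Sketch: building an RPC transport that honours the
    # [oslo_messaging_rabbit] options logged above.
    from oslo_config import cfg

    import oslo_messaging as messaging

    CONF = cfg.ConfigOpts()
    CONF(args=[])

    transport = messaging.get_rpc_transport(
        CONF, url='rabbit://guest:guest@localhost:5672/')  # placeholder URL
    target = messaging.Target(topic='demo-topic')
    client = messaging.RPCClient(transport, target)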
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.386 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.387 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.387 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.387 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.387 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.387 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.387 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.388 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.388 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.388 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.388 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.388 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.388 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.388 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.389 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.389 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.389 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.389 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.389 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.389 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.389 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.390 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.390 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.390 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.390 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.390 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.390 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.391 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.391 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.391 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.391 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.391 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.391 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.391 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.392 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.392 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.392 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.392 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.392 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.392 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.392 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.392 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.393 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.393 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.393 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.393 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.393 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.393 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.394 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.394 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.394 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.394 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.394 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.394 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.394 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.395 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.395 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.395 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.395 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.395 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.395 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.395 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.396 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.396 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.396 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.396 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.396 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.396 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.397 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.397 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.397 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.397 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.397 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.397 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.398 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.398 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.398 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.398 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.398 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.398 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.398 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.398 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.399 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.399 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.399 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.399 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.399 256155 DEBUG oslo_service.service [None req-d8b6c763-b5e2-4215-ae1e-3f7ce56ab480 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
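[Annotation] The block of DEBUG lines above, ending in the row of asterisks, is produced by oslo.config's built-in option dumper, log_opt_values, which the log itself cites at oslo_config/cfg.py:2609 and cfg.py:2613. Masked values such as oslo_limit.password = **** are options registered with secret=True. A minimal standalone sketch of that mechanism, using a hypothetical "demo" option group rather than nova's real groups:

import logging

from oslo_config import cfg

LOG = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)

CONF = cfg.CONF
CONF.register_opts(
    [
        cfg.StrOpt('username', default='nova'),
        cfg.StrOpt('password', secret=True),   # dumped as **** like above
        cfg.BoolOpt('insecure', default=False),
    ],
    group='demo',  # hypothetical group name, for illustration only
)
CONF([])  # parse an empty command line so the config object is usable

# Emits one DEBUG line per registered option plus a closing asterisk row,
# the same format as the nova_compute lines above.
CONF.log_opt_values(LOG, logging.DEBUG)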
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.400 256155 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.415 256155 INFO nova.virt.node [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Determined node identity 5efe20fe-1981-4bd9-8786-d9fddc89a5ae from /var/lib/nova/compute_id
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.416 256155 DEBUG nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.416 256155 DEBUG nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.417 256155 DEBUG nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.417 256155 DEBUG nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.430 256155 DEBUG nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f2b8291fbb0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.432 256155 DEBUG nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f2b8291fbb0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.433 256155 INFO nova.virt.libvirt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Connection event '1' reason 'None'
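[Annotation] The startup sequence above shows nova.virt.libvirt.host.Host spawning its event threads, opening qemu:///system, and registering lifecycle and connection callbacks (host.py:492-530). A minimal sketch of the same setup with the libvirt-python bindings; this is illustrative only, not nova's actual event plumbing:

import libvirt

# The default event implementation must be registered before opening a
# connection whose events we want to receive; nova drives this loop from
# its dedicated "native event thread" seen in the log above.
libvirt.virEventRegisterDefaultImpl()

conn = libvirt.open('qemu:///system')

def lifecycle_cb(conn, dom, event, detail, opaque):
    # Fires on domain start/stop/suspend/resume lifecycle transitions.
    print('domain %s: event=%d detail=%d' % (dom.name(), event, detail))

# Counterpart of "Registering for lifecycle events" in the log above.
conn.domainEventRegisterAny(
    None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)

# Callbacks are only delivered while the event loop is serviced:
#     while True:
#         libvirt.virEventRunDefaultImpl()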
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.444 256155 INFO nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Libvirt host capabilities <capabilities>
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <host>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <uuid>4cd03307-de0c-4b81-bfb4-f23408ecf241</uuid>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <cpu>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <arch>x86_64</arch>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model>EPYC-Rome-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <vendor>AMD</vendor>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <microcode version='16777317'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <signature family='23' model='49' stepping='0'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='x2apic'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='tsc-deadline'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='osxsave'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='hypervisor'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='tsc_adjust'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='spec-ctrl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='stibp'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='arch-capabilities'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='ssbd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='cmp_legacy'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='topoext'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='virt-ssbd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='lbrv'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='tsc-scale'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='vmcb-clean'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='pause-filter'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='pfthreshold'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='svme-addr-chk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='rdctl-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='skip-l1dfl-vmentry'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='mds-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature name='pschange-mc-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <pages unit='KiB' size='4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <pages unit='KiB' size='2048'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <pages unit='KiB' size='1048576'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </cpu>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <power_management>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <suspend_mem/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </power_management>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <iommu support='no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <migration_features>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <live/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <uri_transports>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <uri_transport>tcp</uri_transport>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <uri_transport>rdma</uri_transport>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </uri_transports>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </migration_features>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <topology>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <cells num='1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <cell id='0'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:          <memory unit='KiB'>7864324</memory>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:          <pages unit='KiB' size='4'>1966081</pages>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:          <pages unit='KiB' size='2048'>0</pages>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:          <distances>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:            <sibling id='0' value='10'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:          </distances>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:          <cpus num='8'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:          </cpus>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        </cell>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </cells>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </topology>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <cache>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </cache>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <secmodel>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model>selinux</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <doi>0</doi>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </secmodel>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <secmodel>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model>dac</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <doi>0</doi>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </secmodel>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </host>
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <guest>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <os_type>hvm</os_type>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <arch name='i686'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <wordsize>32</wordsize>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <domain type='qemu'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <domain type='kvm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </arch>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <features>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <pae/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <nonpae/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <acpi default='on' toggle='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <apic default='on' toggle='no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <cpuselection/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <deviceboot/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <disksnapshot default='on' toggle='no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <externalSnapshot/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </features>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </guest>
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <guest>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <os_type>hvm</os_type>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <arch name='x86_64'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <wordsize>64</wordsize>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <domain type='qemu'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <domain type='kvm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </arch>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <features>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <acpi default='on' toggle='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <apic default='on' toggle='no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <cpuselection/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <deviceboot/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <disksnapshot default='on' toggle='no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <externalSnapshot/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </features>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </guest>
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 
Dec  1 05:09:46 np0005540825 nova_compute[256151]: </capabilities>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:
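[Annotation] Everything between "Libvirt host capabilities" and the closing </capabilities> tag above is one XML document, the string libvirt returns from the connection's getCapabilities() call. A short sketch, using only the standard-library ElementTree, that reduces it to the fields most relevant here; caps_xml stands in for the string logged above:

import xml.etree.ElementTree as ET

def summarize_capabilities(caps_xml: str) -> dict:
    """Reduce a libvirt <capabilities> document to a few host facts."""
    root = ET.fromstring(caps_xml)
    cpu = root.find('./host/cpu')
    return {
        'arch': cpu.findtext('arch'),      # x86_64 in the dump above
        'model': cpu.findtext('model'),    # EPYC-Rome-v4
        'features': [f.get('name') for f in cpu.findall('feature')],
        'numa_cells': [
            {'id': cell.get('id'), 'memory_kib': int(cell.findtext('memory'))}
            for cell in root.findall('./host/topology/cells/cell')
        ],
        'guest_arches': [a.get('name') for a in root.findall('./guest/arch')],
    }

Run against the dump above, this reports a single NUMA cell of 7864324 KiB and guest arches i686 and x86_64.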
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.451 256155 DEBUG nova.virt.libvirt.volume.mount [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.458 256155 DEBUG nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
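[Annotation] The "Getting domain capabilities ... via machine types" pass above, and the <domainCapabilities> document that follows, correspond to libvirt's per-emulator/arch/machine-type capabilities query. A sketch of the same call through libvirt-python, with the parameters taken from the log lines below (emulator /usr/libexec/qemu-kvm, arch i686, machine q35, domain type kvm); treat the exact binding signature as an assumption:

import libvirt

conn = libvirt.open('qemu:///system')

# One query per (arch, machine type) pair, matching nova's {'q35', 'pc'}
# loop above; arguments are emulator binary, arch, machine, virt type, flags.
caps_xml = conn.getDomainCapabilities(
    '/usr/libexec/qemu-kvm', 'i686', 'q35', 'kvm', 0)
print(caps_xml)  # the same XML document nova logs below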
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.463 256155 DEBUG nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  1 05:09:46 np0005540825 nova_compute[256151]: <domainCapabilities>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <domain>kvm</domain>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <arch>i686</arch>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <vcpu max='4096'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <iothreads supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <os supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <enum name='firmware'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <loader supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='type'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>rom</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>pflash</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='readonly'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>yes</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>no</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='secure'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>no</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </loader>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </os>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <cpu>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='host-passthrough' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='hostPassthroughMigratable'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>on</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>off</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </mode>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='maximum' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='maximumMigratable'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>on</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>off</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </mode>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='host-model' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <vendor>AMD</vendor>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='x2apic'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='hypervisor'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='stibp'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='ssbd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='overflow-recov'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='succor'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='ibrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='lbrv'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='tsc-scale'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='flushbyasid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='pause-filter'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='pfthreshold'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='disable' name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </mode>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='custom' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-noTSX'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cooperlake'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cooperlake-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cooperlake-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Denverton'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mpx'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Denverton-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mpx'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Denverton-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Denverton-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Dhyana-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Genoa'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amd-psfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='auto-ibrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='stibp-always-on'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amd-psfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='auto-ibrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='stibp-always-on'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Milan'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Milan-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Milan-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amd-psfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='stibp-always-on'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Rome'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Rome-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Rome-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Rome-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='GraniteRapids'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mcdt-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pbrsb-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='prefetchiti'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='GraniteRapids-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mcdt-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pbrsb-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='prefetchiti'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='GraniteRapids-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx10'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx10-128'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx10-256'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx10-512'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mcdt-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pbrsb-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='prefetchiti'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-noTSX'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v5'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v6'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v7'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='IvyBridge'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='IvyBridge-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='IvyBridge-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='IvyBridge-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='KnightsMill'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512er'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512pf'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='KnightsMill-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512er'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512pf'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Opteron_G4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fma4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xop'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Opteron_G4-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fma4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xop'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Opteron_G5'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fma4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tbm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xop'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Opteron_G5-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fma4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tbm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xop'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SapphireRapids'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SapphireRapids-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SapphireRapids-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SapphireRapids-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SierraForest'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cmpccxadd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mcdt-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pbrsb-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SierraForest-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cmpccxadd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mcdt-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pbrsb-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-v5'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Snowridge'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='core-capability'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mpx'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='split-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Snowridge-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='core-capability'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mpx'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='split-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Snowridge-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='core-capability'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='split-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Snowridge-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='core-capability'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='split-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Snowridge-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='athlon'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnow'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnowext'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='athlon-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnow'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnowext'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='core2duo'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='core2duo-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='coreduo'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='coreduo-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='n270'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='n270-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='phenom'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnow'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnowext'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='phenom-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnow'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnowext'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </mode>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </cpu>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <memoryBacking supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <enum name='sourceType'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>file</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>anonymous</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>memfd</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </memoryBacking>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <devices>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <disk supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='diskDevice'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>disk</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>cdrom</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>floppy</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>lun</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='bus'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>fdc</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>scsi</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>usb</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>sata</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='model'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio-transitional</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio-non-transitional</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <graphics supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='type'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vnc</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>egl-headless</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>dbus</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </graphics>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <video supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='modelType'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vga</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>cirrus</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>none</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>bochs</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>ramfb</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </video>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <hostdev supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='mode'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>subsystem</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='startupPolicy'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>default</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>mandatory</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>requisite</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>optional</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='subsysType'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>usb</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>pci</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>scsi</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='capsType'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='pciBackend'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </hostdev>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <rng supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='model'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio-transitional</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio-non-transitional</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='backendModel'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>random</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>egd</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>builtin</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </rng>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <filesystem supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='driverType'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>path</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>handle</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtiofs</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </filesystem>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <tpm supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='model'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>tpm-tis</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>tpm-crb</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='backendModel'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>emulator</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>external</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='backendVersion'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>2.0</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </tpm>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <redirdev supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='bus'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>usb</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </redirdev>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <channel supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='type'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>pty</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>unix</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </channel>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <crypto supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='model'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='type'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>qemu</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='backendModel'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>builtin</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </crypto>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <interface supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='backendType'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>default</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>passt</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </interface>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <panic supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='model'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>isa</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>hyperv</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </panic>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <console supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='type'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>null</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vc</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>pty</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>dev</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>file</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>pipe</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>stdio</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>udp</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>tcp</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>unix</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>qemu-vdagent</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>dbus</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </console>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </devices>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <features>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <gic supported='no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <vmcoreinfo supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <genid supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <backingStoreInput supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <backup supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <async-teardown supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <ps2 supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <sev supported='no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <sgx supported='no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <hyperv supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='features'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>relaxed</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vapic</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>spinlocks</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vpindex</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>runtime</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>synic</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>stimer</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>reset</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vendor_id</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>frequencies</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>reenlightenment</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>tlbflush</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>ipi</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>avic</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>emsr_bitmap</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>xmm_input</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <defaults>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <spinlocks>4095</spinlocks>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <stimer_direct>on</stimer_direct>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </defaults>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </hyperv>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <launchSecurity supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='sectype'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>tdx</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </launchSecurity>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </features>
Dec  1 05:09:46 np0005540825 nova_compute[256151]: </domainCapabilities>
Dec  1 05:09:46 np0005540825 nova_compute[256151]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
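
The XML dumped above (and the arch=i686 dump that follows) is libvirt's domainCapabilities document, which nova fetches in _get_domain_capabilities (host.py:1037, cited in the preceding line). A minimal sketch of fetching and inspecting the same document outside of nova, assuming the libvirt-python bindings and a local qemu:///system connection (the URI and the standalone script are illustrative assumptions, not nova's code path); the parameters mirror the i686/pc dump below:

    import libvirt
    import xml.etree.ElementTree as ET

    # Connect to the local hypervisor; nova uses its own configured URI,
    # qemu:///system is an assumption for a standalone script.
    conn = libvirt.open("qemu:///system")

    # Request the same domainCapabilities document that is logged here,
    # for the emulator/arch/machine/virttype combination it reports.
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator binary, as shown in <path>
        "i686",                   # arch, as shown in <arch>
        "pc",                     # machine type from the debug message
        "kvm",                    # virt type, as shown in <domain>
        0,
    )

    root = ET.fromstring(caps_xml)

    # Report which custom-mode CPU models are usable on this host, and
    # the blocking features for the ones that are not.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        name = model.text
        if model.get("usable") == "yes":
            print("usable:", name)
        else:
            blockers = root.find(
                f"./cpu/mode[@name='custom']/blockers[@model='{name}']"
            )
            missing = (
                [f.get("name") for f in blockers.findall("feature")]
                if blockers is not None
                else []
            )
            print("not usable:", name, "missing:", ", ".join(missing))

    conn.close()

The equivalent CLI is virsh domcapabilities --virttype kvm --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine pc, which is often the quicker way to check why a model such as EPYC-Rome is reported usable='no' in the dump below (the recurring blocker on this host is xsaves).
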
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.472 256155 DEBUG nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  1 05:09:46 np0005540825 nova_compute[256151]: <domainCapabilities>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <domain>kvm</domain>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <arch>i686</arch>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <vcpu max='240'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <iothreads supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <os supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <enum name='firmware'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <loader supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='type'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>rom</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>pflash</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='readonly'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>yes</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>no</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='secure'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>no</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </loader>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </os>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <cpu>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='host-passthrough' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='hostPassthroughMigratable'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>on</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>off</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </mode>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='maximum' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='maximumMigratable'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>on</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>off</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </mode>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='host-model' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <vendor>AMD</vendor>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='x2apic'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='hypervisor'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='stibp'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='ssbd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='overflow-recov'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='succor'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='ibrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='lbrv'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='tsc-scale'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='flushbyasid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='pause-filter'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='pfthreshold'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='disable' name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </mode>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='custom' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-noTSX'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cooperlake'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cooperlake-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cooperlake-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Denverton'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mpx'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Denverton-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mpx'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Denverton-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Denverton-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Dhyana-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Genoa'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amd-psfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='auto-ibrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='stibp-always-on'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amd-psfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='auto-ibrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='stibp-always-on'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Milan'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Milan-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Milan-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amd-psfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='no-nested-data-bp'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='null-sel-clr-base'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='stibp-always-on'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Rome'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Rome-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Rome-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-Rome-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='EPYC-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='GraniteRapids'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mcdt-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pbrsb-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='prefetchiti'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='GraniteRapids-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mcdt-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pbrsb-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='prefetchiti'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='GraniteRapids-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx10'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx10-128'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx10-256'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx10-512'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mcdt-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pbrsb-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='prefetchiti'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-noTSX'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Haswell-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v5'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v6'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Icelake-Server-v7'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='IvyBridge'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='IvyBridge-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='IvyBridge-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='IvyBridge-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='KnightsMill'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512er'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512pf'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='KnightsMill-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-4fmaps'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-4vnniw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512er'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512pf'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Opteron_G4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fma4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xop'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Opteron_G4-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fma4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xop'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Opteron_G5'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fma4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tbm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xop'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Opteron_G5-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fma4'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tbm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xop'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SapphireRapids'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SapphireRapids-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SapphireRapids-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SapphireRapids-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='amx-tile'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-bf16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-fp16'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512-vpopcntdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bitalg'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vbmi2'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrc'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fzrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='la57'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='taa-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='tsx-ldtrk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xfd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SierraForest'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cmpccxadd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mcdt-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pbrsb-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='SierraForest-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-ifma'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-ne-convert'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx-vnni-int8'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='bus-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cmpccxadd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fbsdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='fsrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mcdt-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pbrsb-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='psdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='sbdr-ssdp-no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='serialize'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vaes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='vpclmulqdq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Client-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Skylake-Server-v5'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Snowridge'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='core-capability'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mpx'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='split-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Snowridge-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='core-capability'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='mpx'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='split-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Snowridge-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='core-capability'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='split-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Snowridge-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='core-capability'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='split-lock-detect'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Snowridge-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='cldemote'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='gfni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdir64b'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='movdiri'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='athlon'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnow'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnowext'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='athlon-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnow'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnowext'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='core2duo'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='core2duo-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='coreduo'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='coreduo-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='n270'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='n270-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ss'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='phenom'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnow'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnowext'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='phenom-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnow'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='3dnowext'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </mode>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </cpu>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <memoryBacking supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <enum name='sourceType'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>file</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>anonymous</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>memfd</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </memoryBacking>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <devices>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <disk supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='diskDevice'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>disk</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>cdrom</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>floppy</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>lun</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='bus'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>ide</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>fdc</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>scsi</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>usb</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>sata</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='model'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio-transitional</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio-non-transitional</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <graphics supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='type'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vnc</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>egl-headless</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>dbus</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </graphics>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <video supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='modelType'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vga</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>cirrus</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>none</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>bochs</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>ramfb</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </video>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <hostdev supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='mode'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>subsystem</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='startupPolicy'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>default</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>mandatory</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>requisite</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>optional</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='subsysType'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>usb</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>pci</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>scsi</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='capsType'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='pciBackend'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </hostdev>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <rng supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='model'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio-transitional</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtio-non-transitional</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='backendModel'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>random</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>egd</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>builtin</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </rng>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <filesystem supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='driverType'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>path</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>handle</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>virtiofs</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </filesystem>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <tpm supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='model'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>tpm-tis</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>tpm-crb</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='backendModel'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>emulator</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>external</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='backendVersion'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>2.0</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </tpm>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <redirdev supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='bus'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>usb</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </redirdev>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <channel supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='type'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>pty</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>unix</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </channel>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <crypto supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='model'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='type'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>qemu</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='backendModel'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>builtin</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </crypto>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <interface supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='backendType'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>default</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>passt</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </interface>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <panic supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='model'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>isa</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>hyperv</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </panic>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <console supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='type'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>null</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vc</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>pty</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>dev</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>file</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>pipe</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>stdio</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>udp</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>tcp</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>unix</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>qemu-vdagent</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>dbus</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </console>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </devices>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <features>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <gic supported='no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <vmcoreinfo supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <genid supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <backingStoreInput supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <backup supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <async-teardown supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <ps2 supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <sev supported='no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <sgx supported='no'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <hyperv supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='features'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>relaxed</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vapic</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>spinlocks</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vpindex</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>runtime</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>synic</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>stimer</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>reset</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>vendor_id</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>frequencies</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>reenlightenment</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>tlbflush</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>ipi</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>avic</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>emsr_bitmap</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>xmm_input</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <defaults>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <spinlocks>4095</spinlocks>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <stimer_direct>on</stimer_direct>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </defaults>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </hyperv>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <launchSecurity supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='sectype'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>tdx</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </launchSecurity>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </features>
Dec  1 05:09:46 np0005540825 nova_compute[256151]: </domainCapabilities>
Dec  1 05:09:46 np0005540825 nova_compute[256151]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.499 256155 DEBUG nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  1 05:09:46 np0005540825 nova_compute[256151]: 2025-12-01 10:09:46.506 256155 DEBUG nova.virt.libvirt.host [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  1 05:09:46 np0005540825 nova_compute[256151]: <domainCapabilities>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <domain>kvm</domain>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <arch>x86_64</arch>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <vcpu max='4096'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <iothreads supported='yes'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <os supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <enum name='firmware'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>efi</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <loader supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='type'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>rom</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>pflash</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='readonly'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>yes</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>no</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='secure'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>yes</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>no</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </loader>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  </os>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:  <cpu>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='host-passthrough' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='hostPassthroughMigratable'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>on</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>off</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </mode>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='maximum' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <enum name='maximumMigratable'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>on</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <value>off</value>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </enum>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </mode>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='host-model' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <vendor>AMD</vendor>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='x2apic'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='hypervisor'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='stibp'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='ssbd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='overflow-recov'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='succor'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='ibrs'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='lbrv'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='tsc-scale'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='flushbyasid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='pause-filter'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='pfthreshold'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <feature policy='disable' name='xsaves'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    </mode>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:    <mode name='custom' supported='yes'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-noTSX'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v3'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Broadwell-v4'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='ibrs-all'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='hle'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='invpcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pcid'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='pku'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='rtm'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      </blockers>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512bw'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512cd'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512dq'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512f'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vl'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='avx512vnni'/>
Dec  1 05:09:46 np0005540825 nova_compute[256151]:        <feature name='erms'/>
Dec  1 05:11:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:08.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:11:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:09.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:09 np0005540825 rsyslogd[1006]: imjournal: 4007 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  1 05:11:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:09 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:11:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:11:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:09.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:11:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:11:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:11:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:11:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:11:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:11:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec  1 05:11:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:11:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:10 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:10 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:11.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:11:11] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:11:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:11:11] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:11:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:11 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:11:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:11.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:11:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec  1 05:11:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:12 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:12 np0005540825 podman[257750]: 2025-12-01 10:11:12.214808099 +0000 UTC m=+0.071012732 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 05:11:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:12 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:13.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:13 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7230001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:13.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:13.595Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:11:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:13.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:11:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:11:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:14 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7230001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:14 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:11:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:15.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:11:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:11:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:15 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:15.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 05:11:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:16 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7230001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:16 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7230001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:17.152Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:11:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:11:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:17.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:11:17 np0005540825 podman[257773]: 2025-12-01 10:11:17.238809192 +0000 UTC m=+0.097321148 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:11:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:17 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:17.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:11:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:18 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:18 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:18.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:11:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:19.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:19 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7230001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:19.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:11:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:20 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7230001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:11:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:20 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec  1 05:11:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2861615114' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec  1 05:11:20 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.14964 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  1 05:11:20 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  1 05:11:20 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  1 05:11:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:21.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:11:21] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:11:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:11:21] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:11:21 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.24515 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  1 05:11:21 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  1 05:11:21 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  1 05:11:21 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.24515 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec  1 05:11:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:21 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:21.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:11:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:22 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:22 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:23.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:11:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:23.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:11:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:23.596Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:11:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:23.596Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:11:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:23.596Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:11:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:23 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:11:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:24 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101124 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:11:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:24 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:11:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 292 B/s rd, 0 op/s
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:11:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:11:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:25.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:11:25 np0005540825 podman[257997]: 2025-12-01 10:11:25.306384084 +0000 UTC m=+0.058441571 container create 4b0c9e05b018418e4d01f7c9d14fbc4541b5fe3610a9d4cce6bebe233ee3d74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 05:11:25 np0005540825 systemd[1]: Started libpod-conmon-4b0c9e05b018418e4d01f7c9d14fbc4541b5fe3610a9d4cce6bebe233ee3d74c.scope.
Dec  1 05:11:25 np0005540825 podman[257997]: 2025-12-01 10:11:25.2794135 +0000 UTC m=+0.031471037 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:11:25 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:11:25 np0005540825 podman[257997]: 2025-12-01 10:11:25.404952364 +0000 UTC m=+0.157009901 container init 4b0c9e05b018418e4d01f7c9d14fbc4541b5fe3610a9d4cce6bebe233ee3d74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 05:11:25 np0005540825 podman[257997]: 2025-12-01 10:11:25.416495728 +0000 UTC m=+0.168553205 container start 4b0c9e05b018418e4d01f7c9d14fbc4541b5fe3610a9d4cce6bebe233ee3d74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cohen, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec  1 05:11:25 np0005540825 podman[257997]: 2025-12-01 10:11:25.422477221 +0000 UTC m=+0.174534768 container attach 4b0c9e05b018418e4d01f7c9d14fbc4541b5fe3610a9d4cce6bebe233ee3d74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:11:25 np0005540825 naughty_cohen[258014]: 167 167
Dec  1 05:11:25 np0005540825 systemd[1]: libpod-4b0c9e05b018418e4d01f7c9d14fbc4541b5fe3610a9d4cce6bebe233ee3d74c.scope: Deactivated successfully.
Dec  1 05:11:25 np0005540825 podman[257997]: 2025-12-01 10:11:25.424571808 +0000 UTC m=+0.176629315 container died 4b0c9e05b018418e4d01f7c9d14fbc4541b5fe3610a9d4cce6bebe233ee3d74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:11:25 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b6938bd35e773cec1147afbd1195b619a04d03732135800cfd160cf5edf2b36d-merged.mount: Deactivated successfully.
Dec  1 05:11:25 np0005540825 podman[257997]: 2025-12-01 10:11:25.477058465 +0000 UTC m=+0.229115922 container remove 4b0c9e05b018418e4d01f7c9d14fbc4541b5fe3610a9d4cce6bebe233ee3d74c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Dec  1 05:11:25 np0005540825 systemd[1]: libpod-conmon-4b0c9e05b018418e4d01f7c9d14fbc4541b5fe3610a9d4cce6bebe233ee3d74c.scope: Deactivated successfully.
Dec  1 05:11:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:25 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:25.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:25 np0005540825 podman[258040]: 2025-12-01 10:11:25.721645256 +0000 UTC m=+0.070913069 container create bef58a1706399caec74185f97f504edc870b2de989cf51ced3c609a2e810a4a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec  1 05:11:25 np0005540825 systemd[1]: Started libpod-conmon-bef58a1706399caec74185f97f504edc870b2de989cf51ced3c609a2e810a4a4.scope.
Dec  1 05:11:25 np0005540825 podman[258040]: 2025-12-01 10:11:25.69234372 +0000 UTC m=+0.041611583 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:11:25 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:11:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fe8f1f92a1601d105531a446eb50c486d5b0271bf6a04590ee96d2acad9bce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fe8f1f92a1601d105531a446eb50c486d5b0271bf6a04590ee96d2acad9bce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fe8f1f92a1601d105531a446eb50c486d5b0271bf6a04590ee96d2acad9bce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fe8f1f92a1601d105531a446eb50c486d5b0271bf6a04590ee96d2acad9bce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fe8f1f92a1601d105531a446eb50c486d5b0271bf6a04590ee96d2acad9bce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:25 np0005540825 podman[258040]: 2025-12-01 10:11:25.840833508 +0000 UTC m=+0.190101321 container init bef58a1706399caec74185f97f504edc870b2de989cf51ced3c609a2e810a4a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  1 05:11:25 np0005540825 podman[258040]: 2025-12-01 10:11:25.85598026 +0000 UTC m=+0.205248073 container start bef58a1706399caec74185f97f504edc870b2de989cf51ced3c609a2e810a4a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 05:11:25 np0005540825 podman[258040]: 2025-12-01 10:11:25.859993359 +0000 UTC m=+0.209261182 container attach bef58a1706399caec74185f97f504edc870b2de989cf51ced3c609a2e810a4a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 05:11:25 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:11:25 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:11:25 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:11:25 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:11:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:26 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:26 np0005540825 frosty_hermann[258057]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:11:26 np0005540825 frosty_hermann[258057]: --> All data devices are unavailable
Dec  1 05:11:26 np0005540825 systemd[1]: libpod-bef58a1706399caec74185f97f504edc870b2de989cf51ced3c609a2e810a4a4.scope: Deactivated successfully.
Dec  1 05:11:26 np0005540825 podman[258040]: 2025-12-01 10:11:26.297841636 +0000 UTC m=+0.647109449 container died bef58a1706399caec74185f97f504edc870b2de989cf51ced3c609a2e810a4a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 05:11:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:26 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:26 np0005540825 systemd[1]: var-lib-containers-storage-overlay-25fe8f1f92a1601d105531a446eb50c486d5b0271bf6a04590ee96d2acad9bce-merged.mount: Deactivated successfully.
Dec  1 05:11:26 np0005540825 podman[258040]: 2025-12-01 10:11:26.362033521 +0000 UTC m=+0.711301324 container remove bef58a1706399caec74185f97f504edc870b2de989cf51ced3c609a2e810a4a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 05:11:26 np0005540825 systemd[1]: libpod-conmon-bef58a1706399caec74185f97f504edc870b2de989cf51ced3c609a2e810a4a4.scope: Deactivated successfully.
Dec  1 05:11:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 194 B/s rd, 0 op/s
Dec  1 05:11:27 np0005540825 podman[258176]: 2025-12-01 10:11:27.152964149 +0000 UTC m=+0.068613677 container create 1202025d09ca52cf3d4280ca842bd34dd0eaea500ddcbdf41db747a128e3b986 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:11:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:27.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:11:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:27.154Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:11:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:27.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:27 np0005540825 systemd[1]: Started libpod-conmon-1202025d09ca52cf3d4280ca842bd34dd0eaea500ddcbdf41db747a128e3b986.scope.
Dec  1 05:11:27 np0005540825 podman[258176]: 2025-12-01 10:11:27.124392812 +0000 UTC m=+0.040042390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:11:27 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:11:27 np0005540825 podman[258176]: 2025-12-01 10:11:27.25483906 +0000 UTC m=+0.170488638 container init 1202025d09ca52cf3d4280ca842bd34dd0eaea500ddcbdf41db747a128e3b986 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:11:27 np0005540825 podman[258176]: 2025-12-01 10:11:27.265030757 +0000 UTC m=+0.180680285 container start 1202025d09ca52cf3d4280ca842bd34dd0eaea500ddcbdf41db747a128e3b986 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 05:11:27 np0005540825 podman[258176]: 2025-12-01 10:11:27.269608271 +0000 UTC m=+0.185257799 container attach 1202025d09ca52cf3d4280ca842bd34dd0eaea500ddcbdf41db747a128e3b986 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:11:27 np0005540825 thirsty_albattani[258193]: 167 167
Dec  1 05:11:27 np0005540825 systemd[1]: libpod-1202025d09ca52cf3d4280ca842bd34dd0eaea500ddcbdf41db747a128e3b986.scope: Deactivated successfully.
Dec  1 05:11:27 np0005540825 podman[258176]: 2025-12-01 10:11:27.27505829 +0000 UTC m=+0.190707808 container died 1202025d09ca52cf3d4280ca842bd34dd0eaea500ddcbdf41db747a128e3b986 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 05:11:27 np0005540825 podman[258176]: 2025-12-01 10:11:27.330256401 +0000 UTC m=+0.245905929 container remove 1202025d09ca52cf3d4280ca842bd34dd0eaea500ddcbdf41db747a128e3b986 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:11:27 np0005540825 systemd[1]: var-lib-containers-storage-overlay-05eaed13210cb29f02452a0fa44ed891bd4502fb45f2a28fccd1ad7f69f56a00-merged.mount: Deactivated successfully.
Dec  1 05:11:27 np0005540825 systemd[1]: libpod-conmon-1202025d09ca52cf3d4280ca842bd34dd0eaea500ddcbdf41db747a128e3b986.scope: Deactivated successfully.
Dec  1 05:11:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:27 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:27 np0005540825 podman[258205]: 2025-12-01 10:11:27.53358159 +0000 UTC m=+0.181608740 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 05:11:27 np0005540825 podman[258239]: 2025-12-01 10:11:27.569528598 +0000 UTC m=+0.066799808 container create 9b6b1a7d10a162d86265facf5d71e3180324c60fa7a24c05e8c4d79e6eee3819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wiles, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  1 05:11:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:27.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:27 np0005540825 podman[258239]: 2025-12-01 10:11:27.542435331 +0000 UTC m=+0.039706541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:11:27 np0005540825 systemd[1]: Started libpod-conmon-9b6b1a7d10a162d86265facf5d71e3180324c60fa7a24c05e8c4d79e6eee3819.scope.
Dec  1 05:11:27 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:11:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8647a1c0ba2eb2db8e5dca1cb783d8e71e4c76a210157198cb066faa0b00f64a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8647a1c0ba2eb2db8e5dca1cb783d8e71e4c76a210157198cb066faa0b00f64a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8647a1c0ba2eb2db8e5dca1cb783d8e71e4c76a210157198cb066faa0b00f64a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8647a1c0ba2eb2db8e5dca1cb783d8e71e4c76a210157198cb066faa0b00f64a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:27 np0005540825 podman[258239]: 2025-12-01 10:11:27.710495671 +0000 UTC m=+0.207766941 container init 9b6b1a7d10a162d86265facf5d71e3180324c60fa7a24c05e8c4d79e6eee3819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:11:27 np0005540825 podman[258239]: 2025-12-01 10:11:27.723629248 +0000 UTC m=+0.220900468 container start 9b6b1a7d10a162d86265facf5d71e3180324c60fa7a24c05e8c4d79e6eee3819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:11:27 np0005540825 podman[258239]: 2025-12-01 10:11:27.72885027 +0000 UTC m=+0.226121540 container attach 9b6b1a7d10a162d86265facf5d71e3180324c60fa7a24c05e8c4d79e6eee3819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  1 05:11:28 np0005540825 practical_wiles[258260]: {
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:    "1": [
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:        {
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            "devices": [
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "/dev/loop3"
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            ],
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            "lv_name": "ceph_lv0",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            "lv_size": "21470642176",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            "name": "ceph_lv0",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            "tags": {
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.cluster_name": "ceph",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.crush_device_class": "",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.encrypted": "0",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.osd_id": "1",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.type": "block",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.vdo": "0",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:                "ceph.with_tpm": "0"
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            },
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            "type": "block",
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:            "vg_name": "ceph_vg0"
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:        }
Dec  1 05:11:28 np0005540825 practical_wiles[258260]:    ]
Dec  1 05:11:28 np0005540825 practical_wiles[258260]: }
Dec  1 05:11:28 np0005540825 systemd[1]: libpod-9b6b1a7d10a162d86265facf5d71e3180324c60fa7a24c05e8c4d79e6eee3819.scope: Deactivated successfully.
Dec  1 05:11:28 np0005540825 podman[258239]: 2025-12-01 10:11:28.124751817 +0000 UTC m=+0.622023067 container died 9b6b1a7d10a162d86265facf5d71e3180324c60fa7a24c05e8c4d79e6eee3819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  1 05:11:28 np0005540825 systemd[1]: var-lib-containers-storage-overlay-8647a1c0ba2eb2db8e5dca1cb783d8e71e4c76a210157198cb066faa0b00f64a-merged.mount: Deactivated successfully.
Dec  1 05:11:28 np0005540825 podman[258239]: 2025-12-01 10:11:28.187048681 +0000 UTC m=+0.684319861 container remove 9b6b1a7d10a162d86265facf5d71e3180324c60fa7a24c05e8c4d79e6eee3819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wiles, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:11:28 np0005540825 systemd[1]: libpod-conmon-9b6b1a7d10a162d86265facf5d71e3180324c60fa7a24c05e8c4d79e6eee3819.scope: Deactivated successfully.
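
The JSON document emitted above by the short-lived practical_wiles container matches the shape of "ceph-volume lvm list --format json" output (an assumption; the log does not record the command line): keyed by OSD id, one entry per LV, with the ceph.* metadata duplicated in lv_tags and the parsed "tags" object. A minimal sketch of consuming such a document:

    import json

    # Sample trimmed from the log above; the full document carries the
    # complete ceph.* tag set per LV.
    sample = json.loads("""
    {
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {"ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047"},
          "type": "block"
        }
      ]
    }
    """)

    def osd_devices(doc):
        """Map each OSD id to its backing devices and LV path."""
        out = {}
        for osd_id, entries in doc.items():
            for entry in entries:
                if entry.get("type") == "block":
                    out[osd_id] = (entry["devices"], entry["lv_path"])
        return out

    print(osd_devices(sample))  # {'1': (['/dev/loop3'], '/dev/ceph_vg0/ceph_lv0')}
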
Dec  1 05:11:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:28 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:28 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 194 B/s rd, 0 op/s
Dec  1 05:11:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:28.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:11:28 np0005540825 podman[258375]: 2025-12-01 10:11:28.980966491 +0000 UTC m=+0.053489166 container create 04bf89e80a55e174495f9c785cee747bc7eb56081c6825e9a510d950fdaa3355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_einstein, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 05:11:29 np0005540825 systemd[1]: Started libpod-conmon-04bf89e80a55e174495f9c785cee747bc7eb56081c6825e9a510d950fdaa3355.scope.
Dec  1 05:11:29 np0005540825 podman[258375]: 2025-12-01 10:11:28.958219592 +0000 UTC m=+0.030742257 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:11:29 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:11:29 np0005540825 podman[258375]: 2025-12-01 10:11:29.083751046 +0000 UTC m=+0.156273751 container init 04bf89e80a55e174495f9c785cee747bc7eb56081c6825e9a510d950fdaa3355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Dec  1 05:11:29 np0005540825 podman[258375]: 2025-12-01 10:11:29.095475715 +0000 UTC m=+0.167998380 container start 04bf89e80a55e174495f9c785cee747bc7eb56081c6825e9a510d950fdaa3355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_einstein, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Dec  1 05:11:29 np0005540825 goofy_einstein[258392]: 167 167
Dec  1 05:11:29 np0005540825 podman[258375]: 2025-12-01 10:11:29.103473912 +0000 UTC m=+0.175996637 container attach 04bf89e80a55e174495f9c785cee747bc7eb56081c6825e9a510d950fdaa3355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 05:11:29 np0005540825 systemd[1]: libpod-04bf89e80a55e174495f9c785cee747bc7eb56081c6825e9a510d950fdaa3355.scope: Deactivated successfully.
Dec  1 05:11:29 np0005540825 podman[258375]: 2025-12-01 10:11:29.104963143 +0000 UTC m=+0.177485798 container died 04bf89e80a55e174495f9c785cee747bc7eb56081c6825e9a510d950fdaa3355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:11:29 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1e0b2e1aeafe66ac3a90add352fcc785574932c67c689d5ef9173c2ab8659ec6-merged.mount: Deactivated successfully.
Dec  1 05:11:29 np0005540825 podman[258375]: 2025-12-01 10:11:29.152642479 +0000 UTC m=+0.225165144 container remove 04bf89e80a55e174495f9c785cee747bc7eb56081c6825e9a510d950fdaa3355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  1 05:11:29 np0005540825 systemd[1]: libpod-conmon-04bf89e80a55e174495f9c785cee747bc7eb56081c6825e9a510d950fdaa3355.scope: Deactivated successfully.
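
The create/init/start/attach/died/remove sequence around goofy_einstein (and practical_wiles before it) is the footprint of a one-shot "podman run --rm" invocation, the pattern cephadm uses for short probe commands. A sketch of the same pattern, with the image digest taken from the log; what was actually executed inside the container is not recorded here, so the command below is an assumption:

    import subprocess

    # One-shot container: podman creates, starts, attaches to, and
    # removes the container in a single run; each step surfaces as a
    # podman event like the ones logged above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "ceph", "--version"],  # inner command is an assumption
        capture_output=True, text=True,
    )
    print(result.stdout.strip())
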
Dec  1 05:11:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:29.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
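
The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, arriving on an alternating two-second cadence and each answered 200 with an empty body, look like load-balancer health probes against radosgw rather than client traffic. A minimal reproduction of such a probe (the radosgw listening port is an assumption; the beast log lines do not record it):

    import http.client

    # Issue the same kind of bodyless HEAD probe the beast frontend logs.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # 200 with an empty body, matching the log
    conn.close()
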
Dec  1 05:11:29 np0005540825 podman[258415]: 2025-12-01 10:11:29.413654327 +0000 UTC m=+0.067172787 container create 701d2a83729a60acba4270ef076c502e9038d7961413713e22c8365aa8823c51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 05:11:29 np0005540825 systemd[1]: Started libpod-conmon-701d2a83729a60acba4270ef076c502e9038d7961413713e22c8365aa8823c51.scope.
Dec  1 05:11:29 np0005540825 podman[258415]: 2025-12-01 10:11:29.387136966 +0000 UTC m=+0.040655506 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:11:29 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:11:29 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98f2842eda58d85b7f1da0a08a37cffc7cd909753731f08a9c52d83211b214f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:29 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98f2842eda58d85b7f1da0a08a37cffc7cd909753731f08a9c52d83211b214f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:29 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98f2842eda58d85b7f1da0a08a37cffc7cd909753731f08a9c52d83211b214f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:29 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98f2842eda58d85b7f1da0a08a37cffc7cd909753731f08a9c52d83211b214f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:11:29 np0005540825 podman[258415]: 2025-12-01 10:11:29.51966679 +0000 UTC m=+0.173185310 container init 701d2a83729a60acba4270ef076c502e9038d7961413713e22c8365aa8823c51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  1 05:11:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101129 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:11:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:29 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:29 np0005540825 podman[258415]: 2025-12-01 10:11:29.536182359 +0000 UTC m=+0.189700809 container start 701d2a83729a60acba4270ef076c502e9038d7961413713e22c8365aa8823c51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wing, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:11:29 np0005540825 podman[258415]: 2025-12-01 10:11:29.540206389 +0000 UTC m=+0.193724879 container attach 701d2a83729a60acba4270ef076c502e9038d7961413713e22c8365aa8823c51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wing, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  1 05:11:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:29.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:30 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:30 np0005540825 lvm[258509]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:11:30 np0005540825 lvm[258509]: VG ceph_vg0 finished
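
These two lvm lines record udev-triggered autoactivation: once every PV of ceph_vg0 is online, the VG is reported complete and its LVs are activated, which is what makes the OSD's block LV visible to the ceph-volume scans above. Once activated, the same LVs can be inspected programmatically; "lvs --reportformat json" is standard LVM2, and running it as root is assumed:

    import json
    import subprocess

    # Query LVM for the newly activated VG and print its LVs.
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "ceph_vg0"],
        capture_output=True, text=True, check=True,
    ).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        print(lv["lv_name"], lv["vg_name"], lv["lv_size"])
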
Dec  1 05:11:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:11:30 np0005540825 affectionate_wing[258432]: {}
Dec  1 05:11:30 np0005540825 systemd[1]: libpod-701d2a83729a60acba4270ef076c502e9038d7961413713e22c8365aa8823c51.scope: Deactivated successfully.
Dec  1 05:11:30 np0005540825 systemd[1]: libpod-701d2a83729a60acba4270ef076c502e9038d7961413713e22c8365aa8823c51.scope: Consumed 1.238s CPU time.
Dec  1 05:11:30 np0005540825 podman[258415]: 2025-12-01 10:11:30.307208686 +0000 UTC m=+0.960727226 container died 701d2a83729a60acba4270ef076c502e9038d7961413713e22c8365aa8823c51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 05:11:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:30 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:30 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c98f2842eda58d85b7f1da0a08a37cffc7cd909753731f08a9c52d83211b214f-merged.mount: Deactivated successfully.
Dec  1 05:11:30 np0005540825 podman[258415]: 2025-12-01 10:11:30.36948634 +0000 UTC m=+1.023004820 container remove 701d2a83729a60acba4270ef076c502e9038d7961413713e22c8365aa8823c51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wing, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:11:30 np0005540825 systemd[1]: libpod-conmon-701d2a83729a60acba4270ef076c502e9038d7961413713e22c8365aa8823c51.scope: Deactivated successfully.
Dec  1 05:11:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 194 B/s rd, 0 op/s
Dec  1 05:11:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:11:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:11:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:11:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:11:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:11:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:31.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:11:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:11:31] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:11:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:11:31] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:11:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:31 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:31.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:31 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:11:31 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:11:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:32 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:32 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 584 B/s rd, 97 B/s wr, 0 op/s
Dec  1 05:11:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:32 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:11:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:33.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:33 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72000040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:33.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:33.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:11:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:33.598Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
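
The repeating alertmanager errors show both webhook targets (the Ceph dashboard's /api/prometheus_receiver on compute-1 and compute-2) timing out, so every notification cycle exhausts its retries. A minimal stand-in receiver illustrating the POST Alertmanager keeps attempting (this is not the dashboard implementation; port 8443 is taken from the logged URLs, and the payload fields follow the Alertmanager webhook schema):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Alertmanager POSTs one JSON document per notification group.
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            payload = json.loads(body)
            for alert in payload.get("alerts", []):
                print(alert.get("status"), alert.get("labels", {}).get("alertname"))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("", 8443), Receiver).serve_forever()
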
Dec  1 05:11:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:34 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:34 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 584 B/s rd, 97 B/s wr, 0 op/s
Dec  1 05:11:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:35.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:11:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:35 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:11:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:35 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:11:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:35 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:35.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:35 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:11:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:35 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:11:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:36 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72000040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:36 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:11:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:37.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:11:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:37.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:37 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:11:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:37.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:11:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:38 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:38 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72000040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:11:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:38 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
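
The grace-period lines trace a full cycle: the server enters a nominal 90-second grace window, reloads client recovery info from the backend, and then lifts grace early (after roughly six seconds here) because no clients hold reclaimable state. A toy model of the lift condition as it appears in these logs (an illustration, not Ganesha's code; "clid count" is the number of clients with state to reclaim):

    # Grace can end before the 90 s window expires once recovery data is
    # loaded and either no clients are tracked or all have reclaimed.
    def try_lift_grace(clid_count: int, reclaim_complete: int) -> bool:
        return clid_count == 0 or reclaim_complete >= clid_count

    # Values from the log: "reclaim complete(0) clid count(0)" -> lift.
    print(try_lift_grace(clid_count=0, reclaim_complete=0))  # True -> NOT IN GRACE
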
Dec  1 05:11:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:38.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:11:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:38.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:11:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:39.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:11:39
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'backups', '.nfs', 'default.rgw.control', 'images', 'default.rgw.log', 'default.rgw.meta', 'volumes']
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:11:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:11:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:11:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:39 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:39.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
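
The pg_autoscaler figures above are reproducible: each pool's raw pg target is its capacity ratio times its bias times a cluster-wide PG budget, and a budget of 300 (plausibly 3 OSDs at the default mon_target_pg_per_osd of 100; the budget itself is not printed in the log) matches the logged values exactly, before quantization to a power of two:

    # Reproduce two of the logged pg targets.
    PG_BUDGET = 300  # assumption: 3 OSDs x mon_target_pg_per_osd=100

    def raw_pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * PG_BUDGET

    print(raw_pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337 ('.mgr')
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 ('cephfs.cephfs.meta')
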
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:11:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:11:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:40 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:11:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:40 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:11:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:41.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:11:41] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Dec  1 05:11:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:11:41] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Dec  1 05:11:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:41 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004100 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:41.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:41 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:11:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:42 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:42 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:11:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:11:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:43.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:11:43 np0005540825 podman[258588]: 2025-12-01 10:11:43.2347111 +0000 UTC m=+0.088090877 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
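
The long health_status line above is a podman event generated by the container's configured healthcheck (the '/openstack/healthcheck' test mounted into ovn_metadata_agent per its config_data). The same events can be consumed as a stream; "podman events --format json" is standard podman CLI, though the exact JSON field names (Name, Status, HealthStatus) vary by version and are best treated as assumptions:

    import json
    import subprocess

    # Follow podman's event stream and report health transitions.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Status") == "health_status":
            print(ev.get("Name"), ev.get("HealthStatus"))
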
Dec  1 05:11:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:43 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:43.599Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:11:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:11:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:43.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:11:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:44 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101144 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:11:44 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.24533 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  1 05:11:44 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  1 05:11:44 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  1 05:11:44 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.24533 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec  1 05:11:44 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.24670 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  1 05:11:44 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  1 05:11:44 np0005540825 ceph-mgr[74709]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  1 05:11:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:44 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:11:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:44 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:11:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:44 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:11:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:45.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:11:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:45 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:45.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:46 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:46 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Dec  1 05:11:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:47.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:11:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:11:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:47.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.332 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.333 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.349 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.349 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.350 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.364 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.364 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.365 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.365 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.366 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.366 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.366 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.367 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.367 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.389 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.390 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.390 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.391 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.391 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:11:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:47 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214001aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:47.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:47 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:11:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:11:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1106534863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:11:47 np0005540825 nova_compute[256151]: 2025-12-01 10:11:47.892 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:11:48 np0005540825 nova_compute[256151]: 2025-12-01 10:11:48.137 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:11:48 np0005540825 nova_compute[256151]: 2025-12-01 10:11:48.139 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4932MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:11:48 np0005540825 nova_compute[256151]: 2025-12-01 10:11:48.140 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:11:48 np0005540825 nova_compute[256151]: 2025-12-01 10:11:48.140 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:11:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:48 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:48 np0005540825 podman[258638]: 2025-12-01 10:11:48.258776184 +0000 UTC m=+0.113791446 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 05:11:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:48 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  1 05:11:48 np0005540825 nova_compute[256151]: 2025-12-01 10:11:48.550 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:11:48 np0005540825 nova_compute[256151]: 2025-12-01 10:11:48.550 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:11:48 np0005540825 nova_compute[256151]: 2025-12-01 10:11:48.573 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:11:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:48.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:11:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:11:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/649293739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:11:49 np0005540825 nova_compute[256151]: 2025-12-01 10:11:49.053 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:11:49 np0005540825 nova_compute[256151]: 2025-12-01 10:11:49.060 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:11:49 np0005540825 nova_compute[256151]: 2025-12-01 10:11:49.191 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:11:49 np0005540825 nova_compute[256151]: 2025-12-01 10:11:49.194 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:11:49 np0005540825 nova_compute[256151]: 2025-12-01 10:11:49.194 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:11:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:49.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:49 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:49.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:50 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214001aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:11:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:50 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  1 05:11:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:51.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:11:51] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Dec  1 05:11:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:11:51] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Dec  1 05:11:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101151 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:11:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:51 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:11:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:51.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:11:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:52 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:52 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214001aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec  1 05:11:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:53.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:53 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:53.600Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:11:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:53.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:54 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:54 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  1 05:11:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:11:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:11:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:55.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:11:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:55 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214002ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:55.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:56 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:56 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec  1 05:11:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:57.156Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:11:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:57.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:57 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:11:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:57.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:11:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:58 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214002ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:58 np0005540825 podman[258695]: 2025-12-01 10:11:58.277744126 +0000 UTC m=+0.134629429 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:11:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:58 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:11:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:11:58.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:11:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:11:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:11:59.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:11:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:11:59 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:11:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:11:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:11:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:11:59.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:12:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:12:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:00 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:00 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214002ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:12:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:01.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:01] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec  1 05:12:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:01] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Dec  1 05:12:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:01 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:01.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:02 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:02 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  1 05:12:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:03.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:03 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:03.602Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:12:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:03.602Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:03.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:04 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:04 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:12:04.566 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:12:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:12:04.567 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:12:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:12:04.567 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:12:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:05.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:12:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:05 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:05.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:06 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:06 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:07.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:07.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:07 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:07.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:08 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:08 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:08.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:12:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:08.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:12:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:09.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:12:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:12:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:09 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:12:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:12:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:12:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:12:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:12:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:12:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:09.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:12:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:10 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:10 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:11.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:11] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:12:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:11] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:12:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:11 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:11.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:12 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:12 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:12:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:13.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:13 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:13.604Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:13.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:14 np0005540825 podman[258762]: 2025-12-01 10:12:14.215800366 +0000 UTC m=+0.077894109 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 05:12:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:14 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:14 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
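These ganesha.nfsd TIRPC EVENT lines recur throughout the window: the RPC layer is configured to expect a PROXY-protocol preamble on inbound connections (this ganesha sits behind the haproxy-nfs-cephfs container seen later in the log), and something keeps opening a TCP connection on fd 39 and closing it before a complete header arrives, most plausibly haproxy's own Layer-4 checks. The trailing "rlen = %" looks like an unexpanded format specifier in the ntirpc log call, so the short length it read is never actually printed. A sketch of why a bare connect-and-close trips this path (the 12-byte signature is from the PROXY v2 spec; the parser is illustrative, not ganesha's code):

    # PROXY protocol v2 requires a 16-byte fixed header (12-byte signature,
    # version/command, family, 2-byte length). A health check that connects
    # and closes delivers 0 bytes, so the header read comes up short and the
    # server marks the transport dead, as in the EVENT lines above.
    import socket

    PP2_SIG = b"\r\n\r\n\x00\r\nQUIT\n"   # 12-byte PROXY v2 signature

    def read_proxy_v2_header(sock: socket.socket) -> bytes:
        hdr = sock.recv(16)
        if len(hdr) < 16 or not hdr.startswith(PP2_SIG):
            raise ConnectionError("proxy header rest len failed")
        rest = int.from_bytes(hdr[14:16], "big")  # length of the address block
        return hdr + sock.recv(rest)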
Dec  1 05:12:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:15.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
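_set_new_cache_sizes is the monitor's periodic memory autotuning: from a roughly 0.95 GiB cache target it carves allocations for incremental osdmaps, full osdmaps, and the RocksDB cache. The figures repeat unchanged every few seconds because the cluster is idle. The split is easy to sanity-check; the values below are copied from the log line, and the percentages are plain arithmetic, not a statement about the tuning policy:

    cache_size = 1020054731   # ~0.95 GiB target from the log line
    inc_alloc  = 348127232    # incremental osdmap cache
    full_alloc = 348127232    # full osdmap cache
    kv_alloc   = 318767104    # rocksdb (kv) cache

    for name, v in [("inc", inc_alloc), ("full", full_alloc), ("kv", kv_alloc)]:
        print(f"{name}_alloc = {v / 2**20:7.1f} MiB  ({v / cache_size:5.1%})")
    print(f"sum = {(inc_alloc + full_alloc + kv_alloc) / cache_size:.1%} of target")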
Dec  1 05:12:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:15 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:15.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:16 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:16 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:17.158Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:12:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:17.158Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
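The two alertmanager lines above show the full failure sequence for the ceph-dashboard receiver: the first attempt times out dialing 192.168.122.101:8443, a retry is scheduled, and the notification context then expires ("notify retry canceled after 2 attempts: ... context deadline exceeded"), so the alert is dropped for this flush and re-dispatched on the next one, which is why the same error repeats every few seconds. The obvious first check is whether either receiver answers at all. A sketch using the URL from the log; the JSON body only mimics the shape of an Alertmanager webhook payload and is an assumption, not the dispatcher's exact message:

    import json, urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    body = json.dumps({"status": "firing", "alerts": []}).encode()
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("receiver answered:", resp.status)
    except OSError as exc:   # URLError and timeouts both subclass OSError
        print("receiver unreachable:", exc)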
Dec  1 05:12:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:17.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:17 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:17.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:18 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:18 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:18.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:19 np0005540825 podman[258787]: 2025-12-01 10:12:19.237233613 +0000 UTC m=+0.104878332 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:12:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:12:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:19.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:12:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:19 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:19.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:12:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:20 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:20 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:21.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:21] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:12:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:21] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:12:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:21 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:21.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:22 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:22 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:12:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:23.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:23 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:23.606Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:23.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:24 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:24 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:12:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:12:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:12:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:25.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:25 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:25.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:26 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:26 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:27.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:27.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:27 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:27.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:28 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:28 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:28.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:29.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:29 np0005540825 podman[258842]: 2025-12-01 10:12:29.284761924 +0000 UTC m=+0.138512163 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 05:12:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:29 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:29.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:12:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:30 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:30 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:12:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:31.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:31] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:12:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:31] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
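The pair of lines above is one event logged twice, once by the mgr container and once by the prometheus module's cherrypy access log: Prometheus 2.51.0 on 192.168.122.100 scraping the mgr's /metrics endpoint on its 10-second interval, about 48 KB per scrape. A manual scrape for comparison; 9283 is the mgr prometheus module's default port and is an assumption here, since the access log does not record it:

    import urllib.request

    # One manual scrape of the ceph-mgr prometheus module. Port 9283 is the
    # module default, assumed; the access log above does not show the port.
    with urllib.request.urlopen("http://192.168.122.100:9283/metrics",
                                timeout=5) as resp:
        text = resp.read().decode()
    print(len(text), "bytes")
    print("\n".join(l for l in text.splitlines()
                    if l.startswith("ceph_health_status")))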
Dec  1 05:12:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:31 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a740 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:31.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:12:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 440 B/s rd, 0 op/s
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
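This burst of handle_command / audit pairs is the cephadm mgr module's periodic reconcile against the monitor: list the OSD blocklist, regenerate a minimal ceph.conf, fetch the client.admin and client.bootstrap-osd keys, persist its own state under mgr/cephadm/* config-keys (those two audit entries are logged without a cmd payload), and look for destroyed OSDs in the tree. The read-only commands in the burst can be replayed through librados' mon_command interface; a sketch, assuming python3-rados, a local /etc/ceph/ceph.conf, and an admin keyring (none of which this log confirms):

    import json, rados

    # Replays two of the audited commands above via librados. Assumes
    # /etc/ceph/ceph.conf and an admin keyring are present on this host.
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        for cmd in ({"prefix": "osd blocklist ls", "format": "json"},
                    {"prefix": "config generate-minimal-conf"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, out[:80])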
Dec  1 05:12:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:32 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101232 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:12:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:32 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:12:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:12:32 np0005540825 podman[259044]: 2025-12-01 10:12:32.905260702 +0000 UTC m=+0.065412684 container create d4893243bc750ca0e490ade3f458fd1bbeb160efc8c88757c0dd3b5f5b6f808f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_euler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:12:32 np0005540825 systemd[1]: Started libpod-conmon-d4893243bc750ca0e490ade3f458fd1bbeb160efc8c88757c0dd3b5f5b6f808f.scope.
Dec  1 05:12:32 np0005540825 podman[259044]: 2025-12-01 10:12:32.876032829 +0000 UTC m=+0.036184791 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:12:32 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:12:33 np0005540825 podman[259044]: 2025-12-01 10:12:33.021917729 +0000 UTC m=+0.182069711 container init d4893243bc750ca0e490ade3f458fd1bbeb160efc8c88757c0dd3b5f5b6f808f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_euler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  1 05:12:33 np0005540825 podman[259044]: 2025-12-01 10:12:33.034993559 +0000 UTC m=+0.195145551 container start d4893243bc750ca0e490ade3f458fd1bbeb160efc8c88757c0dd3b5f5b6f808f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:12:33 np0005540825 podman[259044]: 2025-12-01 10:12:33.04210312 +0000 UTC m=+0.202255092 container attach d4893243bc750ca0e490ade3f458fd1bbeb160efc8c88757c0dd3b5f5b6f808f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_euler, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:12:33 np0005540825 gracious_euler[259058]: 167 167
Dec  1 05:12:33 np0005540825 systemd[1]: libpod-d4893243bc750ca0e490ade3f458fd1bbeb160efc8c88757c0dd3b5f5b6f808f.scope: Deactivated successfully.
Dec  1 05:12:33 np0005540825 podman[259044]: 2025-12-01 10:12:33.047279499 +0000 UTC m=+0.207431481 container died d4893243bc750ca0e490ade3f458fd1bbeb160efc8c88757c0dd3b5f5b6f808f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:12:33 np0005540825 systemd[1]: var-lib-containers-storage-overlay-eccbaa8d68b0eb5a6656ec2aad0fd99ebd639c157f2c09d1b24c1dbb6c33719d-merged.mount: Deactivated successfully.
Dec  1 05:12:33 np0005540825 podman[259044]: 2025-12-01 10:12:33.10067255 +0000 UTC m=+0.260824492 container remove d4893243bc750ca0e490ade3f458fd1bbeb160efc8c88757c0dd3b5f5b6f808f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  1 05:12:33 np0005540825 systemd[1]: libpod-conmon-d4893243bc750ca0e490ade3f458fd1bbeb160efc8c88757c0dd3b5f5b6f808f.scope: Deactivated successfully.
Dec  1 05:12:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:33.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:33 np0005540825 podman[259085]: 2025-12-01 10:12:33.352955972 +0000 UTC m=+0.075667379 container create 5e6c4d517344b2ab496b6e06d1da23dc67291fac519ccb8e8e44592e088e458f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_sammet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:12:33 np0005540825 systemd[1]: Started libpod-conmon-5e6c4d517344b2ab496b6e06d1da23dc67291fac519ccb8e8e44592e088e458f.scope.
Dec  1 05:12:33 np0005540825 podman[259085]: 2025-12-01 10:12:33.32417947 +0000 UTC m=+0.046890937 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:12:33 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:12:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955a05923f71e9ef2b37c3357ebc2365120c1104e2c29457ba773ff9623b433a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955a05923f71e9ef2b37c3357ebc2365120c1104e2c29457ba773ff9623b433a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955a05923f71e9ef2b37c3357ebc2365120c1104e2c29457ba773ff9623b433a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955a05923f71e9ef2b37c3357ebc2365120c1104e2c29457ba773ff9623b433a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955a05923f71e9ef2b37c3357ebc2365120c1104e2c29457ba773ff9623b433a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
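The run of xfs remount notices above is emitted as podman bind-mounts host paths into the fresh container: on this xfs on-disk format (without the bigtime feature) inode timestamps saturate at 0x7fffffff seconds, the signed 32-bit epoch limit. Decoding the cutoff:

    from datetime import datetime, timezone

    # The "supports timestamps until 2038 (0x7fffffff)" cutoff is simply the
    # maximum signed 32-bit Unix timestamp.
    limit = 0x7FFFFFFF
    print(hex(limit), "=",
          datetime.fromtimestamp(limit, tz=timezone.utc).isoformat())
    # 0x7fffffff = 2038-01-19T03:14:07+00:00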
Dec  1 05:12:33 np0005540825 podman[259085]: 2025-12-01 10:12:33.470375899 +0000 UTC m=+0.193087316 container init 5e6c4d517344b2ab496b6e06d1da23dc67291fac519ccb8e8e44592e088e458f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_sammet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  1 05:12:33 np0005540825 podman[259085]: 2025-12-01 10:12:33.485912295 +0000 UTC m=+0.208623682 container start 5e6c4d517344b2ab496b6e06d1da23dc67291fac519ccb8e8e44592e088e458f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 05:12:33 np0005540825 podman[259085]: 2025-12-01 10:12:33.490074777 +0000 UTC m=+0.212786194 container attach 5e6c4d517344b2ab496b6e06d1da23dc67291fac519ccb8e8e44592e088e458f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:12:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:33 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:33.607Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:12:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:33.608Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:33.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:33 np0005540825 great_sammet[259102]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:12:33 np0005540825 great_sammet[259102]: --> All data devices are unavailable
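great_sammet is one of the short-lived cephadm helper containers in this window (create, start, two lines of output, die, remove, all within a second). Its output is ceph-volume's drive-selection report: the host offers no free physical disks and one LVM device, and nothing is available for new OSDs, so the reconcile loop makes no changes. The same report can be requested directly; a sketch, assuming cephadm is installed on the host (some cephadm versions want a `--` before the ceph-volume arguments):

    import json, subprocess

    # Re-run the device probe that great_sammet performed. cephadm may print
    # pull/infer chatter before the JSON, so parse from the first '['.
    raw = subprocess.run(
        ["cephadm", "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(raw[raw.index("["):]):
        print(dev["path"], "available:", dev["available"],
              dev.get("rejected_reasons", []))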
Dec  1 05:12:33 np0005540825 systemd[1]: libpod-5e6c4d517344b2ab496b6e06d1da23dc67291fac519ccb8e8e44592e088e458f.scope: Deactivated successfully.
Dec  1 05:12:33 np0005540825 podman[259085]: 2025-12-01 10:12:33.936087181 +0000 UTC m=+0.658798598 container died 5e6c4d517344b2ab496b6e06d1da23dc67291fac519ccb8e8e44592e088e458f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:12:33 np0005540825 systemd[1]: var-lib-containers-storage-overlay-955a05923f71e9ef2b37c3357ebc2365120c1104e2c29457ba773ff9623b433a-merged.mount: Deactivated successfully.
Dec  1 05:12:33 np0005540825 podman[259085]: 2025-12-01 10:12:33.99908047 +0000 UTC m=+0.721791877 container remove 5e6c4d517344b2ab496b6e06d1da23dc67291fac519ccb8e8e44592e088e458f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_sammet, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  1 05:12:34 np0005540825 systemd[1]: libpod-conmon-5e6c4d517344b2ab496b6e06d1da23dc67291fac519ccb8e8e44592e088e458f.scope: Deactivated successfully.
Dec  1 05:12:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 175 B/s rd, 0 op/s
Dec  1 05:12:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:34 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a740 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:34 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:34 np0005540825 podman[259222]: 2025-12-01 10:12:34.8303867 +0000 UTC m=+0.068529097 container create 4b28d3968541013e3af77e93e9f2c1338f301146c9bdca345df8807ef1808dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cerf, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:12:34 np0005540825 systemd[1]: Started libpod-conmon-4b28d3968541013e3af77e93e9f2c1338f301146c9bdca345df8807ef1808dff.scope.
Dec  1 05:12:34 np0005540825 podman[259222]: 2025-12-01 10:12:34.803756327 +0000 UTC m=+0.041898764 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:12:34 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:12:34 np0005540825 podman[259222]: 2025-12-01 10:12:34.920432524 +0000 UTC m=+0.158574921 container init 4b28d3968541013e3af77e93e9f2c1338f301146c9bdca345df8807ef1808dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  1 05:12:34 np0005540825 podman[259222]: 2025-12-01 10:12:34.929339353 +0000 UTC m=+0.167481760 container start 4b28d3968541013e3af77e93e9f2c1338f301146c9bdca345df8807ef1808dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cerf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:12:34 np0005540825 podman[259222]: 2025-12-01 10:12:34.933645588 +0000 UTC m=+0.171787985 container attach 4b28d3968541013e3af77e93e9f2c1338f301146c9bdca345df8807ef1808dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 05:12:34 np0005540825 determined_cerf[259239]: 167 167
Dec  1 05:12:34 np0005540825 systemd[1]: libpod-4b28d3968541013e3af77e93e9f2c1338f301146c9bdca345df8807ef1808dff.scope: Deactivated successfully.
Dec  1 05:12:34 np0005540825 podman[259222]: 2025-12-01 10:12:34.936816653 +0000 UTC m=+0.174959030 container died 4b28d3968541013e3af77e93e9f2c1338f301146c9bdca345df8807ef1808dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cerf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:12:34 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5b46d3d7ac5e37f90a7402a7bba21cbbc4c918d9924c9ab429f2292bb44fe602-merged.mount: Deactivated successfully.
Dec  1 05:12:34 np0005540825 podman[259222]: 2025-12-01 10:12:34.990460111 +0000 UTC m=+0.228602478 container remove 4b28d3968541013e3af77e93e9f2c1338f301146c9bdca345df8807ef1808dff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:12:35 np0005540825 systemd[1]: libpod-conmon-4b28d3968541013e3af77e93e9f2c1338f301146c9bdca345df8807ef1808dff.scope: Deactivated successfully.
Dec  1 05:12:35 np0005540825 podman[259262]: 2025-12-01 10:12:35.162122772 +0000 UTC m=+0.058550451 container create 9a4ac6710d03c925b2e2710c00f996f88f2559139129b651b0e9ff756ebd2baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:12:35 np0005540825 systemd[1]: Started libpod-conmon-9a4ac6710d03c925b2e2710c00f996f88f2559139129b651b0e9ff756ebd2baa.scope.
Dec  1 05:12:35 np0005540825 podman[259262]: 2025-12-01 10:12:35.134741328 +0000 UTC m=+0.031169017 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:12:35 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:12:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e027ade9825fd6a71653d2bbe4ed3cb592b31a9c86a4d6a68018e7755496d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e027ade9825fd6a71653d2bbe4ed3cb592b31a9c86a4d6a68018e7755496d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e027ade9825fd6a71653d2bbe4ed3cb592b31a9c86a4d6a68018e7755496d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e027ade9825fd6a71653d2bbe4ed3cb592b31a9c86a4d6a68018e7755496d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:12:35 np0005540825 podman[259262]: 2025-12-01 10:12:35.266032167 +0000 UTC m=+0.162459856 container init 9a4ac6710d03c925b2e2710c00f996f88f2559139129b651b0e9ff756ebd2baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:12:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:35.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:35 np0005540825 podman[259262]: 2025-12-01 10:12:35.275678575 +0000 UTC m=+0.172106264 container start 9a4ac6710d03c925b2e2710c00f996f88f2559139129b651b0e9ff756ebd2baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mcnulty, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:12:35 np0005540825 podman[259262]: 2025-12-01 10:12:35.280612938 +0000 UTC m=+0.177040617 container attach 9a4ac6710d03c925b2e2710c00f996f88f2559139129b651b0e9ff756ebd2baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mcnulty, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 05:12:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101235 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]: {
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:    "1": [
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:        {
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            "devices": [
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "/dev/loop3"
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            ],
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            "lv_name": "ceph_lv0",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            "lv_size": "21470642176",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            "name": "ceph_lv0",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:12:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:35 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            "tags": {
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.cluster_name": "ceph",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.crush_device_class": "",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.encrypted": "0",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.osd_id": "1",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.type": "block",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.vdo": "0",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:                "ceph.with_tpm": "0"
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            },
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            "type": "block",
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:            "vg_name": "ceph_vg0"
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:        }
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]:    ]
Dec  1 05:12:35 np0005540825 blissful_mcnulty[259279]: }
Dec  1 05:12:35 np0005540825 systemd[1]: libpod-9a4ac6710d03c925b2e2710c00f996f88f2559139129b651b0e9ff756ebd2baa.scope: Deactivated successfully.
Dec  1 05:12:35 np0005540825 podman[259262]: 2025-12-01 10:12:35.645943319 +0000 UTC m=+0.542371058 container died 9a4ac6710d03c925b2e2710c00f996f88f2559139129b651b0e9ff756ebd2baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mcnulty, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  1 05:12:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:35.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:35 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:12:35.671 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:12:35 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:12:35.672 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 05:12:35 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:12:35.674 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:12:35 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e2e027ade9825fd6a71653d2bbe4ed3cb592b31a9c86a4d6a68018e7755496d0-merged.mount: Deactivated successfully.
Dec  1 05:12:35 np0005540825 podman[259262]: 2025-12-01 10:12:35.702565537 +0000 UTC m=+0.598993176 container remove 9a4ac6710d03c925b2e2710c00f996f88f2559139129b651b0e9ff756ebd2baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mcnulty, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:12:35 np0005540825 systemd[1]: libpod-conmon-9a4ac6710d03c925b2e2710c00f996f88f2559139129b651b0e9ff756ebd2baa.scope: Deactivated successfully.
Dec  1 05:12:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 175 B/s rd, 0 op/s
Dec  1 05:12:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:36 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:36 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4001c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:36 np0005540825 podman[259393]: 2025-12-01 10:12:36.499841716 +0000 UTC m=+0.072566936 container create 6f1a2f57e9588bfe19a6c3d696b740c34ab3a1c0c752c0e13228aca827dde31f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:12:36 np0005540825 systemd[1]: Started libpod-conmon-6f1a2f57e9588bfe19a6c3d696b740c34ab3a1c0c752c0e13228aca827dde31f.scope.
Dec  1 05:12:36 np0005540825 podman[259393]: 2025-12-01 10:12:36.468158107 +0000 UTC m=+0.040883387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:12:36 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:12:36 np0005540825 podman[259393]: 2025-12-01 10:12:36.617423318 +0000 UTC m=+0.190148598 container init 6f1a2f57e9588bfe19a6c3d696b740c34ab3a1c0c752c0e13228aca827dde31f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:12:36 np0005540825 podman[259393]: 2025-12-01 10:12:36.629651146 +0000 UTC m=+0.202376366 container start 6f1a2f57e9588bfe19a6c3d696b740c34ab3a1c0c752c0e13228aca827dde31f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_proskuriakova, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:12:36 np0005540825 podman[259393]: 2025-12-01 10:12:36.634163077 +0000 UTC m=+0.206888307 container attach 6f1a2f57e9588bfe19a6c3d696b740c34ab3a1c0c752c0e13228aca827dde31f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:12:36 np0005540825 stoic_proskuriakova[259409]: 167 167
Dec  1 05:12:36 np0005540825 systemd[1]: libpod-6f1a2f57e9588bfe19a6c3d696b740c34ab3a1c0c752c0e13228aca827dde31f.scope: Deactivated successfully.
Dec  1 05:12:36 np0005540825 podman[259393]: 2025-12-01 10:12:36.639719026 +0000 UTC m=+0.212444256 container died 6f1a2f57e9588bfe19a6c3d696b740c34ab3a1c0c752c0e13228aca827dde31f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_proskuriakova, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 05:12:36 np0005540825 systemd[1]: var-lib-containers-storage-overlay-dae2c5f8299d5e27c140b451edca1bf4154ec8f507996393eacaf286e22846df-merged.mount: Deactivated successfully.
Dec  1 05:12:36 np0005540825 podman[259393]: 2025-12-01 10:12:36.694401161 +0000 UTC m=+0.267126391 container remove 6f1a2f57e9588bfe19a6c3d696b740c34ab3a1c0c752c0e13228aca827dde31f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:12:36 np0005540825 systemd[1]: libpod-conmon-6f1a2f57e9588bfe19a6c3d696b740c34ab3a1c0c752c0e13228aca827dde31f.scope: Deactivated successfully.
Dec  1 05:12:36 np0005540825 podman[259435]: 2025-12-01 10:12:36.953903197 +0000 UTC m=+0.078503186 container create 77b821390f30f3b9631f75e2a66e37930b5199c47cac8a71ab19937ec8e50edd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:12:37 np0005540825 systemd[1]: Started libpod-conmon-77b821390f30f3b9631f75e2a66e37930b5199c47cac8a71ab19937ec8e50edd.scope.
Dec  1 05:12:37 np0005540825 podman[259435]: 2025-12-01 10:12:36.920022679 +0000 UTC m=+0.044622708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:12:37 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:12:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78acfb7a93c56405cc05d29e651f5ac13c617e156f8ea083fe5022eb4244ec8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78acfb7a93c56405cc05d29e651f5ac13c617e156f8ea083fe5022eb4244ec8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78acfb7a93c56405cc05d29e651f5ac13c617e156f8ea083fe5022eb4244ec8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78acfb7a93c56405cc05d29e651f5ac13c617e156f8ea083fe5022eb4244ec8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:12:37 np0005540825 podman[259435]: 2025-12-01 10:12:37.076189914 +0000 UTC m=+0.200789943 container init 77b821390f30f3b9631f75e2a66e37930b5199c47cac8a71ab19937ec8e50edd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:12:37 np0005540825 podman[259435]: 2025-12-01 10:12:37.088653058 +0000 UTC m=+0.213253017 container start 77b821390f30f3b9631f75e2a66e37930b5199c47cac8a71ab19937ec8e50edd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  1 05:12:37 np0005540825 podman[259435]: 2025-12-01 10:12:37.092127301 +0000 UTC m=+0.216727290 container attach 77b821390f30f3b9631f75e2a66e37930b5199c47cac8a71ab19937ec8e50edd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_turing, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:12:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:37.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:37.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:37 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:37.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:37 np0005540825 lvm[259528]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:12:37 np0005540825 lvm[259528]: VG ceph_vg0 finished
Dec  1 05:12:37 np0005540825 keen_turing[259452]: {}
Dec  1 05:12:38 np0005540825 systemd[1]: libpod-77b821390f30f3b9631f75e2a66e37930b5199c47cac8a71ab19937ec8e50edd.scope: Deactivated successfully.
Dec  1 05:12:38 np0005540825 systemd[1]: libpod-77b821390f30f3b9631f75e2a66e37930b5199c47cac8a71ab19937ec8e50edd.scope: Consumed 1.586s CPU time.
Dec  1 05:12:38 np0005540825 podman[259435]: 2025-12-01 10:12:38.005491401 +0000 UTC m=+1.130091430 container died 77b821390f30f3b9631f75e2a66e37930b5199c47cac8a71ab19937ec8e50edd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:12:38 np0005540825 systemd[1]: var-lib-containers-storage-overlay-78acfb7a93c56405cc05d29e651f5ac13c617e156f8ea083fe5022eb4244ec8c-merged.mount: Deactivated successfully.
Dec  1 05:12:38 np0005540825 podman[259435]: 2025-12-01 10:12:38.061867032 +0000 UTC m=+1.186467021 container remove 77b821390f30f3b9631f75e2a66e37930b5199c47cac8a71ab19937ec8e50edd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  1 05:12:38 np0005540825 systemd[1]: libpod-conmon-77b821390f30f3b9631f75e2a66e37930b5199c47cac8a71ab19937ec8e50edd.scope: Deactivated successfully.
Dec  1 05:12:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 175 B/s rd, 0 op/s
Dec  1 05:12:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:12:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:12:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:12:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:12:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:38 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:38 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:38.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:39 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:12:39 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:12:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:39.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:12:39
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', '.nfs']
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:12:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:12:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:12:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:39 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4001c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:12:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:39.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:12:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:12:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 176 B/s rd, 0 op/s
Dec  1 05:12:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:12:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:40 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:40 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a7a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:40 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:12:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:41.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:41] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:12:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:41] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:12:41 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Check health
Dec  1 05:12:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:41 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:41.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 968 B/s rd, 440 B/s wr, 1 op/s
Dec  1 05:12:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:42 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:42 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:43.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:43 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:12:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:43 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:12:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:43.608Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:12:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:43.608Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:12:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:43 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a7c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:43.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Dec  1 05:12:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:44 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:44 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:45 np0005540825 podman[259601]: 2025-12-01 10:12:45.231574647 +0000 UTC m=+0.083413336 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:12:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:12:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:45.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:45 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:45.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 05:12:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:46 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:46 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:46 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:12:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:47.161Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:47.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:47 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:47.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 05:12:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:48 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:48 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:48.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.196 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.197 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.197 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.198 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.214 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.215 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.215 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.216 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.216 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.217 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.217 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.218 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.218 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.244 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.244 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.245 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.245 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.246 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
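The "Running cmd (subprocess): ceph df ..." line is the resource tracker sizing its RBD-backed disk pool before reporting inventory. A minimal sketch of the same probe, assuming the usual top-level "stats" block of ceph df --format=json output (the field names are assumptions, not taken from this log):

    import json
    import subprocess

    def rbd_capacity(conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same command the log shows oslo.concurrency executing.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        stats = json.loads(out)["stats"]
        # In the common layout these totals are reported in bytes.
        return stats["total_bytes"], stats["total_avail_bytes"]

The two runs visible in this cycle took 0.517 s and 0.501 s; the second accounts for most of the 0.643 s the "compute_resources" lock is held further down.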
Dec  1 05:12:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:49.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
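radosgw's beast frontend writes one access line per request in the fixed format above (client, user, timestamp, request line, status, body bytes, latency); the anonymous "HEAD /" probes arriving every ~2 s from 192.168.122.100 and .102 look like load-balancer health checks. A small parser matching only what is visible in these lines:

    import re

    BEAST = re.compile(
        r'beast: \S+ (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous '
            '[01/Dec/2025:10:12:49.292 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000027s')
    m = BEAST.search(line)
    print(m.group("client"), m.group("request"),
          m.group("status"), m.group("latency"))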
Dec  1 05:12:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:49 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:12:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:49 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:49.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:12:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1068358381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:12:49 np0005540825 nova_compute[256151]: 2025-12-01 10:12:49.763 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.016 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.018 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4900MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.019 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.020 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.093 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.094 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:12:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.124 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:12:50 np0005540825 podman[259649]: 2025-12-01 10:12:50.247789305 +0000 UTC m=+0.103688430 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 05:12:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:12:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:50 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:50 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:12:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824092753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.625 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.632 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.659 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.662 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:12:50 np0005540825 nova_compute[256151]: 2025-12-01 10:12:50.663 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
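The Acquiring/acquired/released triples around "compute_resources" come from oslo.concurrency's lock helpers, which log at DEBUG on entry and exit; nova wraps the resource tracker's update path in such a lock so the resource audit and the placement update do not interleave. A minimal sketch of the decorator pattern that produces exactly this trace (nova's real code uses its own wrapper around these helpers; class and method names here are abbreviated from the log):

    from oslo_concurrency import lockutils

    class ResourceTracker:
        @lockutils.synchronized("compute_resources")
        def _update_available_resource(self, context):
            # The body runs with the semaphore held; the DEBUG
            # "acquired ..." and "released ..." lines above bracket this
            # call, including the 0.643 s hold time reported in the log.
            pass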
Dec  1 05:12:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:51.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:51] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:12:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:12:51] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:12:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:51 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:12:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:51.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:12:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Dec  1 05:12:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:52 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101252 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:12:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:52 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:52 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:12:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:52 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:12:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:53.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:53.609Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:53 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:53.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:12:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.177707) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583974177815, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2128, "num_deletes": 251, "total_data_size": 4312363, "memory_usage": 4379648, "flush_reason": "Manual Compaction"}
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583974208550, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4178025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20096, "largest_seqno": 22223, "table_properties": {"data_size": 4168505, "index_size": 6014, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19487, "raw_average_key_size": 20, "raw_value_size": 4149481, "raw_average_value_size": 4286, "num_data_blocks": 265, "num_entries": 968, "num_filter_entries": 968, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764583754, "oldest_key_time": 1764583754, "file_creation_time": 1764583974, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 30884 microseconds, and 15968 cpu microseconds.
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.208619) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4178025 bytes OK
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.208650) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.210576) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.210598) EVENT_LOG_v1 {"time_micros": 1764583974210592, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.210622) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4303768, prev total WAL file size 4303768, number of live WAL files 2.
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.212391) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(4080KB)], [44(12MB)]
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583974212441, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17788977, "oldest_snapshot_seqno": -1}
Dec  1 05:12:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:54 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5482 keys, 15619969 bytes, temperature: kUnknown
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583974319614, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 15619969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15580565, "index_size": 24574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 138096, "raw_average_key_size": 25, "raw_value_size": 15478465, "raw_average_value_size": 2823, "num_data_blocks": 1016, "num_entries": 5482, "num_filter_entries": 5482, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764583974, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.319938) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15619969 bytes
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.321632) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.8 rd, 145.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 13.0 +0.0 blob) out(14.9 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 5998, records dropped: 516 output_compression: NoCompression
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.321664) EVENT_LOG_v1 {"time_micros": 1764583974321649, "job": 22, "event": "compaction_finished", "compaction_time_micros": 107268, "compaction_time_cpu_micros": 60401, "output_level": 6, "num_output_files": 1, "total_output_size": 15619969, "num_input_records": 5998, "num_output_records": 5482, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
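The ceph-mon's embedded RocksDB reports each flush and compaction twice: a human-readable summary and a machine-readable EVENT_LOG_v1 JSON record (flush_started, table_file_creation, compaction_finished above). The JSON part parses directly once the marker is stripped:

    import json

    MARKER = "EVENT_LOG_v1 "

    def parse_event(line):
        # Returns the embedded JSON record, or None for non-event lines;
        # also handles the "(Original Log Time ...)" prefix variant.
        i = line.find(MARKER)
        if i < 0:
            return None
        return json.loads(line[i + len(MARKER):])

    ev = parse_event('rocksdb: EVENT_LOG_v1 {"time_micros": 1, '
                     '"job": 22, "event": "compaction_finished"}')
    print(ev["event"], ev["job"])

Read that way, job 22 above compacted 17,788,977 bytes of input into a single 15,619,969-byte L6 file, dropping 516 of 5,998 records, consistent with the write-amplify(3.7) figure in the summary line.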
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583974323143, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764583974327563, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.212238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.327638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.327645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.327647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.327650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:12:54.327655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:12:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:54 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a800 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:12:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:12:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:12:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:55.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:55 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:12:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:55 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:55.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  1 05:12:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:56 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:56 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:57.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:12:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:12:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:57.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:12:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101257 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:12:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:57 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:12:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:57.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:12:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Dec  1 05:12:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:58 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:58 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:12:58.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:12:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:12:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:12:59.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:12:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:12:59 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:12:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:12:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:12:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:12:59.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Dec  1 05:13:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:00 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f723000a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:00 np0005540825 podman[259701]: 2025-12-01 10:13:00.310265054 +0000 UTC m=+0.170504921 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  1 05:13:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:00 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:01.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:01] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:13:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:01] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Dec  1 05:13:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:01 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:01.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Dec  1 05:13:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:02 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72000035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:02 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72140030a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:13:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:03.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:13:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:03.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:03 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:03.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec  1 05:13:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:04 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:04 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:13:04.568 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:13:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:13:04.568 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:13:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:13:04.568 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:13:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  1 05:13:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:05.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  1 05:13:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101305 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:13:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:05 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72140030a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
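The recurring ganesha "svc_vc_recv ... proxy header rest len failed" events line up with the haproxy Layer4 checks above: the NFS backends sit behind the haproxy-nfs container, and connections that open and close without delivering a complete PROXY-protocol preamble make ganesha mark the transport dead (the literal "%" in the message is an upstream format-string artifact in ganesha itself). As an assumption-laden illustration only, a PROXY v1 preamble is a single CRLF-terminated text line sent before any application bytes; the hostname, addresses, and port below are hypothetical:

    import socket

    # PROXY protocol v1: one text line naming the original src/dst before
    # the NFS RPC stream. A probe that connects and closes without sending
    # this is the kind of peer ganesha is complaining about above.
    header = b"PROXY TCP4 192.168.122.100 192.168.122.100 54321 2049\r\n"
    with socket.create_connection(
            ("compute-0.ctlplane.example.com", 2049)) as s:
        s.sendall(header)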
Dec  1 05:13:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:05.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Dec  1 05:13:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:06 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:06 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  1 05:13:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3483028568' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  1 05:13:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  1 05:13:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3483028568' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
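The audit channel records every mon command with the dispatching entity; here client.openstack (the OpenStack hosts at 192.168.122.10x) issues "df" and "osd pool get-quota" as JSON command objects against the leader mon. The same calls can be made directly through the python-rados binding; a short sketch, assuming the standard mon_command interface:

    import json
    import rados

    # Issue the same mon commands the audit log records above, via librados.
    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], ret, (out or b"{}")[:60])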
Dec  1 05:13:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:07.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:07.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:07 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:07.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:13:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:08 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214001530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:08 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:08.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=cleanup t=2025-12-01T10:13:09.238867765Z level=info msg="Completed cleanup jobs" duration=31.268398ms
Dec  1 05:13:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:09.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=plugins.update.checker t=2025-12-01T10:13:09.324986373Z level=info msg="Update check succeeded" duration=52.919769ms
Dec  1 05:13:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=grafana.update.checker t=2025-12-01T10:13:09.325874397Z level=info msg="Update check succeeded" duration=46.824225ms
Dec  1 05:13:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:13:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:13:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:13:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:13:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:13:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:13:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:13:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:13:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:09 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:09.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:13:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:10 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:10 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7214001530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:11.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:11] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:13:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:11] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:13:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:11 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:11.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:13:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:12 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:12 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:13.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:13.611Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:13 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72140027b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:13.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:13:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:14 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:14 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:14 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:13:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:15.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:15 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:15.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:13:16 np0005540825 podman[259770]: 2025-12-01 10:13:16.233366346 +0000 UTC m=+0.082473662 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 05:13:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:16 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:16 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:17.164Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:17.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:17 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:17.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:17 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:13:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:17 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:13:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:13:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:18 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:18 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:18.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:19.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:19 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208002670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:19.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.280489) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584000280537, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 454, "num_deletes": 252, "total_data_size": 474249, "memory_usage": 482000, "flush_reason": "Manual Compaction"}
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584000300215, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 351224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22224, "largest_seqno": 22677, "table_properties": {"data_size": 348788, "index_size": 536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6335, "raw_average_key_size": 19, "raw_value_size": 343884, "raw_average_value_size": 1058, "num_data_blocks": 24, "num_entries": 325, "num_filter_entries": 325, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764583975, "oldest_key_time": 1764583975, "file_creation_time": 1764584000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 19794 microseconds, and 2677 cpu microseconds.
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.300280) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 351224 bytes OK
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.300379) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.302741) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.302773) EVENT_LOG_v1 {"time_micros": 1764584000302765, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.302795) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 471551, prev total WAL file size 471551, number of live WAL files 2.
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.303578) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(342KB)], [47(14MB)]
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584000303623, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 15971193, "oldest_snapshot_seqno": -1}
Dec  1 05:13:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:20 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:20 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5302 keys, 11953314 bytes, temperature: kUnknown
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584000409107, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 11953314, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11919447, "index_size": 19485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 134768, "raw_average_key_size": 25, "raw_value_size": 11824832, "raw_average_value_size": 2230, "num_data_blocks": 793, "num_entries": 5302, "num_filter_entries": 5302, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764584000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.409715) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 11953314 bytes
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.411909) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.9 rd, 112.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(79.5) write-amplify(34.0) OK, records in: 5807, records dropped: 505 output_compression: NoCompression
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.411941) EVENT_LOG_v1 {"time_micros": 1764584000411927, "job": 24, "event": "compaction_finished", "compaction_time_micros": 105873, "compaction_time_cpu_micros": 49523, "output_level": 6, "num_output_files": 1, "total_output_size": 11953314, "num_input_records": 5807, "num_output_records": 5302, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584000412199, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584000418077, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.303454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.418224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.418232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.418235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.418238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:13:20 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:13:20.418241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:13:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:20 : epoch 692d696f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:13:21 np0005540825 podman[259794]: 2025-12-01 10:13:21.217267876 +0000 UTC m=+0.084477505 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 05:13:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:21.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:21] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:13:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:21] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Dec  1 05:13:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:21 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:21.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:13:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:22 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7208002670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:22 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:23.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:23.612Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:13:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:23.612Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:13:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:23 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:23.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:13:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:24 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:24 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72080027f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:13:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:13:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:25.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:25 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:25.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:13:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:26 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:26 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:27.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:27.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101327 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:13:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:27 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72080027f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:27.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:13:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:28 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:28 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:28.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:29.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:29 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:29.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:13:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:30 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:30 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:31 np0005540825 podman[259848]: 2025-12-01 10:13:31.285826811 +0000 UTC m=+0.136625123 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:13:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:31.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:31] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:13:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:31] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:13:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:31 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71f4003350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:31.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:13:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:32 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72080027f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:32 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f71fc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:33.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:33.613Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:33 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:33.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:13:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:34 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[256612]: 01/12/2025 10:13:34 : epoch 692d696f : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7200004960 fd 39 proxy ignored for local
Dec  1 05:13:34 np0005540825 kernel: ganesha.nfsd[258783]: segfault at 50 ip 00007f72e06d432e sp 00007f7297ffe210 error 4 in libntirpc.so.5.8[7f72e06b9000+2c000] likely on CPU 5 (core 0, socket 5)
Dec  1 05:13:34 np0005540825 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  1 05:13:34 np0005540825 systemd[1]: Started Process Core Dump (PID 259880/UID 0).
Dec  1 05:13:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:35.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:13:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:35.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:13:35 np0005540825 systemd-coredump[259881]: Process 256616 (ganesha.nfsd) of user 0 dumped core.
Dec  1 05:13:35 np0005540825 systemd-coredump[259881]: Stack trace of thread 63:
Dec  1 05:13:35 np0005540825 systemd-coredump[259881]: #0  0x00007f72e06d432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Dec  1 05:13:35 np0005540825 systemd-coredump[259881]: ELF object binary architecture: AMD x86-64
Dec  1 05:13:35 np0005540825 systemd[1]: systemd-coredump@7-259880-0.service: Deactivated successfully.
Dec  1 05:13:35 np0005540825 systemd[1]: systemd-coredump@7-259880-0.service: Consumed 1.396s CPU time.
Dec  1 05:13:36 np0005540825 podman[259888]: 2025-12-01 10:13:36.032090683 +0000 UTC m=+0.041798791 container died bc00ab93db54e77d935cbeba155ec3fdde1da6c800ad51dadad6b8739f7d4cb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  1 05:13:36 np0005540825 systemd[1]: var-lib-containers-storage-overlay-8242ab99e9451555a04252722ca5ea65c6e69733f646a9187a904c2da83e7cc9-merged.mount: Deactivated successfully.
Dec  1 05:13:36 np0005540825 podman[259888]: 2025-12-01 10:13:36.099939552 +0000 UTC m=+0.109647620 container remove bc00ab93db54e77d935cbeba155ec3fdde1da6c800ad51dadad6b8739f7d4cb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:13:36 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Main process exited, code=exited, status=139/n/a
Dec  1 05:13:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:13:36 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Failed with result 'exit-code'.
Dec  1 05:13:36 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 2.412s CPU time.
Dec  1 05:13:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:37.167Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:37.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:37.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:13:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:38.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:39.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  1 05:13:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:13:39
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['volumes', '.rgw.root', '.nfs', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log']
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:13:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:13:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:13:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:39.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
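Note: each pg_autoscaler line above applies the same arithmetic: raw PG target = (pool's share of raw capacity) x bias x cluster PG budget, then quantized to a power of two and clamped to the pool's floor. The logged values are consistent with a budget of 300 PGs, which would follow from 3 OSDs x the default mon_target_pg_per_osd of 100 (an inference from this cluster's size, not a value printed here). A sketch reproducing two of the lines under that assumption:

    # PG_BUDGET = 300 is the inferred cluster budget (3 OSDs x
    # mon_target_pg_per_osd = 100); it is not printed in the log itself.
    PG_BUDGET = 300

    def raw_pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * PG_BUDGET

    # ".mgr": matches "pg target 0.0021557249951162337 quantized to 1"
    print(raw_pg_target(7.185749983720779e-06, 1.0))
    # "cephfs.cephfs.meta": matches "pg target 0.0006104707950771635 quantized to 16"
    print(raw_pg_target(5.087256625643029e-07, 4.0))
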
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:13:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:13:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:13:40 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 05:13:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101340 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:13:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 05:13:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 05:13:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 05:13:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 05:13:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:41] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:13:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:41.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:41] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:13:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 268 B/s rd, 0 op/s
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:13:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
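Note: the burst of mon_command dispatches above is a cephadm/mgr refresh cycle: it drops per-host osd_memory_target overrides ("config rm"), regenerates the minimal client config, and re-reads the admin and bootstrap-osd keyrings. A sketch mirroring two of those dispatches from the CLI (both are standard mon commands; the ceph CLI and an admin keyring are assumed):

    import subprocess

    # Same payloads as the audit lines above, issued by hand.
    print(subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"]).decode())
    subprocess.run(["ceph", "config", "rm",
                    "osd/host:compute-1", "osd_memory_target"], check=True)
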
Dec  1 05:13:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:41.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:42 np0005540825 podman[260108]: 2025-12-01 10:13:42.227850116 +0000 UTC m=+0.093437605 container create a5f6e23f402265309c779f0fbd0918c04108956629dbaea8c37d531d0890c4c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 05:13:42 np0005540825 podman[260108]: 2025-12-01 10:13:42.158909419 +0000 UTC m=+0.024496938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:13:42 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 05:13:42 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 05:13:42 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:13:42 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:42 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:42 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:13:42 np0005540825 systemd[1]: Started libpod-conmon-a5f6e23f402265309c779f0fbd0918c04108956629dbaea8c37d531d0890c4c7.scope.
Dec  1 05:13:42 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:13:42 np0005540825 podman[260108]: 2025-12-01 10:13:42.351953223 +0000 UTC m=+0.217540802 container init a5f6e23f402265309c779f0fbd0918c04108956629dbaea8c37d531d0890c4c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_curran, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  1 05:13:42 np0005540825 podman[260108]: 2025-12-01 10:13:42.361953741 +0000 UTC m=+0.227541220 container start a5f6e23f402265309c779f0fbd0918c04108956629dbaea8c37d531d0890c4c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:13:42 np0005540825 relaxed_curran[260125]: 167 167
Dec  1 05:13:42 np0005540825 systemd[1]: libpod-a5f6e23f402265309c779f0fbd0918c04108956629dbaea8c37d531d0890c4c7.scope: Deactivated successfully.
Dec  1 05:13:42 np0005540825 podman[260108]: 2025-12-01 10:13:42.403752121 +0000 UTC m=+0.269339690 container attach a5f6e23f402265309c779f0fbd0918c04108956629dbaea8c37d531d0890c4c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Dec  1 05:13:42 np0005540825 podman[260108]: 2025-12-01 10:13:42.405284582 +0000 UTC m=+0.270872101 container died a5f6e23f402265309c779f0fbd0918c04108956629dbaea8c37d531d0890c4c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:13:42 np0005540825 systemd[1]: var-lib-containers-storage-overlay-570f38ca5cfe7d1b916d229b13219c722db33d86605440b8dd27eaea570140a3-merged.mount: Deactivated successfully.
Dec  1 05:13:42 np0005540825 podman[260108]: 2025-12-01 10:13:42.482436219 +0000 UTC m=+0.348023718 container remove a5f6e23f402265309c779f0fbd0918c04108956629dbaea8c37d531d0890c4c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 05:13:42 np0005540825 systemd[1]: libpod-conmon-a5f6e23f402265309c779f0fbd0918c04108956629dbaea8c37d531d0890c4c7.scope: Deactivated successfully.
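Note: the create/init/start/attach/died/remove sequence above, all inside roughly 0.3 s, is the signature of a short-lived helper container run from the digest-pinned quay.io/ceph/ceph squid image; the "167 167" it prints matches the ceph uid/gid used by those images. A sketch of the pattern, where the command run inside the container is illustrative and only the image digest comes from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # One-shot container: run, print, remove -- the lifecycle logged above.
    # The stat invocation is an assumption, not recovered from the log line.
    out = subprocess.check_output(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"])
    print(out.decode().strip())   # "167 167" would match the log output
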
Dec  1 05:13:42 np0005540825 podman[260151]: 2025-12-01 10:13:42.722464782 +0000 UTC m=+0.071876107 container create 4bbbb8a2e4b7719587e355782ddc1944be5059ad630ab2c0e97767b350779135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_davinci, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:13:42 np0005540825 systemd[1]: Started libpod-conmon-4bbbb8a2e4b7719587e355782ddc1944be5059ad630ab2c0e97767b350779135.scope.
Dec  1 05:13:42 np0005540825 podman[260151]: 2025-12-01 10:13:42.692199151 +0000 UTC m=+0.041610516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:13:42 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:13:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3a8945bd9a2a060f14d7e00a78617af116dbc76021ae90e81b88e62c5920de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3a8945bd9a2a060f14d7e00a78617af116dbc76021ae90e81b88e62c5920de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3a8945bd9a2a060f14d7e00a78617af116dbc76021ae90e81b88e62c5920de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3a8945bd9a2a060f14d7e00a78617af116dbc76021ae90e81b88e62c5920de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:42 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3a8945bd9a2a060f14d7e00a78617af116dbc76021ae90e81b88e62c5920de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:42 np0005540825 podman[260151]: 2025-12-01 10:13:42.849689802 +0000 UTC m=+0.199101167 container init 4bbbb8a2e4b7719587e355782ddc1944be5059ad630ab2c0e97767b350779135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_davinci, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  1 05:13:42 np0005540825 podman[260151]: 2025-12-01 10:13:42.866466322 +0000 UTC m=+0.215877647 container start 4bbbb8a2e4b7719587e355782ddc1944be5059ad630ab2c0e97767b350779135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_davinci, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:13:42 np0005540825 podman[260151]: 2025-12-01 10:13:42.872268458 +0000 UTC m=+0.221679803 container attach 4bbbb8a2e4b7719587e355782ddc1944be5059ad630ab2c0e97767b350779135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  1 05:13:43 np0005540825 compassionate_davinci[260180]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:13:43 np0005540825 compassionate_davinci[260180]: --> All data devices are unavailable
Dec  1 05:13:43 np0005540825 systemd[1]: libpod-4bbbb8a2e4b7719587e355782ddc1944be5059ad630ab2c0e97767b350779135.scope: Deactivated successfully.
Dec  1 05:13:43 np0005540825 podman[260151]: 2025-12-01 10:13:43.308922851 +0000 UTC m=+0.658334176 container died 4bbbb8a2e4b7719587e355782ddc1944be5059ad630ab2c0e97767b350779135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 05:13:43 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0e3a8945bd9a2a060f14d7e00a78617af116dbc76021ae90e81b88e62c5920de-merged.mount: Deactivated successfully.
Dec  1 05:13:43 np0005540825 podman[260151]: 2025-12-01 10:13:43.369780732 +0000 UTC m=+0.719192047 container remove 4bbbb8a2e4b7719587e355782ddc1944be5059ad630ab2c0e97767b350779135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 05:13:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:43.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:43 np0005540825 systemd[1]: libpod-conmon-4bbbb8a2e4b7719587e355782ddc1944be5059ad630ab2c0e97767b350779135.scope: Deactivated successfully.
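Note: the compassionate_davinci run above reads like a ceph-volume batch report: of the data devices passed, 0 were physical disks and 1 was an LVM logical volume, and that one was rejected as unavailable -- consistent with the "lvm list" output further down showing /dev/ceph_vg0/ceph_lv0 already tagged for osd.1, so there is nothing new to deploy. A dry-run sketch of such a report (the device path comes from the listing below; ceph-volume on the host is assumed):

    import subprocess

    # "--report" makes lvm batch a dry run: it prints what would be done
    # without touching the device.
    print(subprocess.check_output(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0"]).decode())
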
Dec  1 05:13:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 268 B/s rd, 0 op/s
Dec  1 05:13:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:43.614Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:13:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:43.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:13:44 np0005540825 podman[260312]: 2025-12-01 10:13:44.158511163 +0000 UTC m=+0.068819016 container create e545c91152b2c84deada3f03930b0ce3e2b463578f9cf65ba0ee5a80b64f177b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 05:13:44 np0005540825 systemd[1]: Started libpod-conmon-e545c91152b2c84deada3f03930b0ce3e2b463578f9cf65ba0ee5a80b64f177b.scope.
Dec  1 05:13:44 np0005540825 podman[260312]: 2025-12-01 10:13:44.130976885 +0000 UTC m=+0.041284798 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:13:44 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:13:44 np0005540825 podman[260312]: 2025-12-01 10:13:44.266059705 +0000 UTC m=+0.176367608 container init e545c91152b2c84deada3f03930b0ce3e2b463578f9cf65ba0ee5a80b64f177b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 05:13:44 np0005540825 podman[260312]: 2025-12-01 10:13:44.277594594 +0000 UTC m=+0.187902437 container start e545c91152b2c84deada3f03930b0ce3e2b463578f9cf65ba0ee5a80b64f177b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_faraday, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:13:44 np0005540825 podman[260312]: 2025-12-01 10:13:44.281566001 +0000 UTC m=+0.191873914 container attach e545c91152b2c84deada3f03930b0ce3e2b463578f9cf65ba0ee5a80b64f177b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_faraday, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:13:44 np0005540825 nervous_faraday[260328]: 167 167
Dec  1 05:13:44 np0005540825 systemd[1]: libpod-e545c91152b2c84deada3f03930b0ce3e2b463578f9cf65ba0ee5a80b64f177b.scope: Deactivated successfully.
Dec  1 05:13:44 np0005540825 podman[260312]: 2025-12-01 10:13:44.284941411 +0000 UTC m=+0.195249304 container died e545c91152b2c84deada3f03930b0ce3e2b463578f9cf65ba0ee5a80b64f177b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_faraday, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Dec  1 05:13:44 np0005540825 systemd[1]: var-lib-containers-storage-overlay-88a4fc2b031a5ce47a0cb54dbb8750c0583c76d68d2c1abf06af6e45735a3bfe-merged.mount: Deactivated successfully.
Dec  1 05:13:44 np0005540825 podman[260312]: 2025-12-01 10:13:44.337754037 +0000 UTC m=+0.248061910 container remove e545c91152b2c84deada3f03930b0ce3e2b463578f9cf65ba0ee5a80b64f177b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_faraday, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 05:13:44 np0005540825 systemd[1]: libpod-conmon-e545c91152b2c84deada3f03930b0ce3e2b463578f9cf65ba0ee5a80b64f177b.scope: Deactivated successfully.
Dec  1 05:13:44 np0005540825 podman[260352]: 2025-12-01 10:13:44.606088039 +0000 UTC m=+0.102752115 container create 2f605c6ea12ab0c1dd38591f435a85bfad4b284503f62d9ea3b78dc4b78a4574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kepler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:13:44 np0005540825 podman[260352]: 2025-12-01 10:13:44.5796333 +0000 UTC m=+0.076297386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:13:44 np0005540825 systemd[1]: Started libpod-conmon-2f605c6ea12ab0c1dd38591f435a85bfad4b284503f62d9ea3b78dc4b78a4574.scope.
Dec  1 05:13:44 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:13:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd468aecdc4dc1c0fd19988299c81ad130bd8c0139c6cb2b489465ae2fde2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd468aecdc4dc1c0fd19988299c81ad130bd8c0139c6cb2b489465ae2fde2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd468aecdc4dc1c0fd19988299c81ad130bd8c0139c6cb2b489465ae2fde2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd468aecdc4dc1c0fd19988299c81ad130bd8c0139c6cb2b489465ae2fde2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:44 np0005540825 podman[260352]: 2025-12-01 10:13:44.731627333 +0000 UTC m=+0.228291419 container init 2f605c6ea12ab0c1dd38591f435a85bfad4b284503f62d9ea3b78dc4b78a4574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kepler, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 05:13:44 np0005540825 podman[260352]: 2025-12-01 10:13:44.747074887 +0000 UTC m=+0.243738943 container start 2f605c6ea12ab0c1dd38591f435a85bfad4b284503f62d9ea3b78dc4b78a4574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kepler, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 05:13:44 np0005540825 podman[260352]: 2025-12-01 10:13:44.75129005 +0000 UTC m=+0.247954106 container attach 2f605c6ea12ab0c1dd38591f435a85bfad4b284503f62d9ea3b78dc4b78a4574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kepler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]: {
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:    "1": [
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:        {
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            "devices": [
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "/dev/loop3"
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            ],
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            "lv_name": "ceph_lv0",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            "lv_size": "21470642176",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            "name": "ceph_lv0",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            "tags": {
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.cluster_name": "ceph",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.crush_device_class": "",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.encrypted": "0",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.osd_id": "1",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.type": "block",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.vdo": "0",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:                "ceph.with_tpm": "0"
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            },
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            "type": "block",
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:            "vg_name": "ceph_vg0"
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:        }
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]:    ]
Dec  1 05:13:45 np0005540825 flamboyant_kepler[260368]: }
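Note: the flamboyant_kepler lines above are a "ceph-volume lvm list --format json"-style report split across journal lines: one OSD (id 1) backed by LV /dev/ceph_vg0/ceph_lv0 on /dev/loop3, unencrypted, drive group "default_drive_group". A sketch that parses the reassembled report, abridged to the fields it uses:

    import json

    # The report above, reassembled from the journal lines (abridged).
    report = json.loads("""
    {
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "type": "block",
          "tags": {"ceph.osd_id": "1",
                   "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047"}
        }
      ]
    }
    """)

    for osd_id, lvs in report.items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])
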
Dec  1 05:13:45 np0005540825 systemd[1]: libpod-2f605c6ea12ab0c1dd38591f435a85bfad4b284503f62d9ea3b78dc4b78a4574.scope: Deactivated successfully.
Dec  1 05:13:45 np0005540825 podman[260352]: 2025-12-01 10:13:45.105846553 +0000 UTC m=+0.602510599 container died 2f605c6ea12ab0c1dd38591f435a85bfad4b284503f62d9ea3b78dc4b78a4574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kepler, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:13:45 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0fbd468aecdc4dc1c0fd19988299c81ad130bd8c0139c6cb2b489465ae2fde2d-merged.mount: Deactivated successfully.
Dec  1 05:13:45 np0005540825 podman[260352]: 2025-12-01 10:13:45.156519531 +0000 UTC m=+0.653183607 container remove 2f605c6ea12ab0c1dd38591f435a85bfad4b284503f62d9ea3b78dc4b78a4574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kepler, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  1 05:13:45 np0005540825 systemd[1]: libpod-conmon-2f605c6ea12ab0c1dd38591f435a85bfad4b284503f62d9ea3b78dc4b78a4574.scope: Deactivated successfully.
Dec  1 05:13:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:45.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 268 B/s rd, 0 op/s
Dec  1 05:13:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:13:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:45.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:13:45 np0005540825 podman[260479]: 2025-12-01 10:13:45.902107335 +0000 UTC m=+0.068558728 container create dde4630ff7c4da1556f4feff51dc2c09a86bb6be90127bbe0d88c76cdfd700c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kare, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:13:45 np0005540825 systemd[1]: Started libpod-conmon-dde4630ff7c4da1556f4feff51dc2c09a86bb6be90127bbe0d88c76cdfd700c9.scope.
Dec  1 05:13:45 np0005540825 podman[260479]: 2025-12-01 10:13:45.874429533 +0000 UTC m=+0.040880976 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:13:45 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:13:46 np0005540825 podman[260479]: 2025-12-01 10:13:46.01312113 +0000 UTC m=+0.179572583 container init dde4630ff7c4da1556f4feff51dc2c09a86bb6be90127bbe0d88c76cdfd700c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:13:46 np0005540825 podman[260479]: 2025-12-01 10:13:46.027006322 +0000 UTC m=+0.193457695 container start dde4630ff7c4da1556f4feff51dc2c09a86bb6be90127bbe0d88c76cdfd700c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kare, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 05:13:46 np0005540825 podman[260479]: 2025-12-01 10:13:46.030606658 +0000 UTC m=+0.197058061 container attach dde4630ff7c4da1556f4feff51dc2c09a86bb6be90127bbe0d88c76cdfd700c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kare, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:13:46 np0005540825 funny_kare[260496]: 167 167
Dec  1 05:13:46 np0005540825 systemd[1]: libpod-dde4630ff7c4da1556f4feff51dc2c09a86bb6be90127bbe0d88c76cdfd700c9.scope: Deactivated successfully.
Dec  1 05:13:46 np0005540825 podman[260479]: 2025-12-01 10:13:46.037939665 +0000 UTC m=+0.204391068 container died dde4630ff7c4da1556f4feff51dc2c09a86bb6be90127bbe0d88c76cdfd700c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  1 05:13:46 np0005540825 systemd[1]: var-lib-containers-storage-overlay-da0f27d11d489161df9b46efd47f15d0d7f7cdd422f534b2ba1d106309c61b2c-merged.mount: Deactivated successfully.
Dec  1 05:13:46 np0005540825 podman[260479]: 2025-12-01 10:13:46.107603272 +0000 UTC m=+0.274054675 container remove dde4630ff7c4da1556f4feff51dc2c09a86bb6be90127bbe0d88c76cdfd700c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 05:13:46 np0005540825 systemd[1]: libpod-conmon-dde4630ff7c4da1556f4feff51dc2c09a86bb6be90127bbe0d88c76cdfd700c9.scope: Deactivated successfully.
Dec  1 05:13:46 np0005540825 podman[260518]: 2025-12-01 10:13:46.376781897 +0000 UTC m=+0.073164572 container create 5191332e41fc6e629ef7c77564aea4595736aed6f924e803b969062939c08b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Dec  1 05:13:46 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Scheduled restart job, restart counter is at 8.
Dec  1 05:13:46 np0005540825 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:13:46 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 2.412s CPU time.
Dec  1 05:13:46 np0005540825 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 05:13:46 np0005540825 systemd[1]: Started libpod-conmon-5191332e41fc6e629ef7c77564aea4595736aed6f924e803b969062939c08b28.scope.
Dec  1 05:13:46 np0005540825 podman[260518]: 2025-12-01 10:13:46.34780792 +0000 UTC m=+0.044190635 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:13:46 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:13:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32006fa7d7221e7cf372323a6d1ebcc2d850f078b209a4e00866edbf31fe5253/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32006fa7d7221e7cf372323a6d1ebcc2d850f078b209a4e00866edbf31fe5253/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32006fa7d7221e7cf372323a6d1ebcc2d850f078b209a4e00866edbf31fe5253/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32006fa7d7221e7cf372323a6d1ebcc2d850f078b209a4e00866edbf31fe5253/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:46 np0005540825 nova_compute[256151]: 2025-12-01 10:13:46.488 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:13:46 np0005540825 podman[260518]: 2025-12-01 10:13:46.508186288 +0000 UTC m=+0.204568923 container init 5191332e41fc6e629ef7c77564aea4595736aed6f924e803b969062939c08b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendeleev, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:13:46 np0005540825 podman[260518]: 2025-12-01 10:13:46.516717227 +0000 UTC m=+0.213099872 container start 5191332e41fc6e629ef7c77564aea4595736aed6f924e803b969062939c08b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendeleev, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  1 05:13:46 np0005540825 podman[260518]: 2025-12-01 10:13:46.520118788 +0000 UTC m=+0.216501433 container attach 5191332e41fc6e629ef7c77564aea4595736aed6f924e803b969062939c08b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendeleev, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:13:46 np0005540825 podman[260532]: 2025-12-01 10:13:46.522292857 +0000 UTC m=+0.086181691 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 05:13:46 np0005540825 podman[260611]: 2025-12-01 10:13:46.660144062 +0000 UTC m=+0.038720559 container create 4f1523bf49bcff099e1266e897b2b41b51c0ec03a98f241add23f9fa9f6c54ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  1 05:13:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a7e1b2d49d1439f0720904e007db259ff877d76a9cc697cf7e06079b825f4e5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a7e1b2d49d1439f0720904e007db259ff877d76a9cc697cf7e06079b825f4e5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a7e1b2d49d1439f0720904e007db259ff877d76a9cc697cf7e06079b825f4e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a7e1b2d49d1439f0720904e007db259ff877d76a9cc697cf7e06079b825f4e5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.pytvsu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:13:46 np0005540825 podman[260611]: 2025-12-01 10:13:46.723060578 +0000 UTC m=+0.101637095 container init 4f1523bf49bcff099e1266e897b2b41b51c0ec03a98f241add23f9fa9f6c54ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:13:46 np0005540825 podman[260611]: 2025-12-01 10:13:46.728094863 +0000 UTC m=+0.106671350 container start 4f1523bf49bcff099e1266e897b2b41b51c0ec03a98f241add23f9fa9f6c54ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:13:46 np0005540825 bash[260611]: 4f1523bf49bcff099e1266e897b2b41b51c0ec03a98f241add23f9fa9f6c54ae
Dec  1 05:13:46 np0005540825 podman[260611]: 2025-12-01 10:13:46.641525732 +0000 UTC m=+0.020102239 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:13:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:46 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  1 05:13:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:46 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  1 05:13:46 np0005540825 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:13:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:46 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  1 05:13:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:46 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  1 05:13:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:46 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  1 05:13:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:46 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  1 05:13:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:46 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  1 05:13:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:46 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:13:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:47.169Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:47 np0005540825 nova_compute[256151]: 2025-12-01 10:13:47.186 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:13:47 np0005540825 nova_compute[256151]: 2025-12-01 10:13:47.187 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:13:47 np0005540825 nova_compute[256151]: 2025-12-01 10:13:47.187 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:13:47 np0005540825 lvm[260737]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:13:47 np0005540825 lvm[260737]: VG ceph_vg0 finished
Dec  1 05:13:47 np0005540825 distracted_mendeleev[260538]: {}
Dec  1 05:13:47 np0005540825 systemd[1]: libpod-5191332e41fc6e629ef7c77564aea4595736aed6f924e803b969062939c08b28.scope: Deactivated successfully.
Dec  1 05:13:47 np0005540825 systemd[1]: libpod-5191332e41fc6e629ef7c77564aea4595736aed6f924e803b969062939c08b28.scope: Consumed 1.173s CPU time.
Dec  1 05:13:47 np0005540825 nova_compute[256151]: 2025-12-01 10:13:47.309 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:13:47 np0005540825 nova_compute[256151]: 2025-12-01 10:13:47.310 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:13:47 np0005540825 podman[260741]: 2025-12-01 10:13:47.345721347 +0000 UTC m=+0.042881210 container died 5191332e41fc6e629ef7c77564aea4595736aed6f924e803b969062939c08b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 05:13:47 np0005540825 systemd[1]: var-lib-containers-storage-overlay-32006fa7d7221e7cf372323a6d1ebcc2d850f078b209a4e00866edbf31fe5253-merged.mount: Deactivated successfully.
Dec  1 05:13:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:47.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:47 np0005540825 podman[260741]: 2025-12-01 10:13:47.40255813 +0000 UTC m=+0.099717963 container remove 5191332e41fc6e629ef7c77564aea4595736aed6f924e803b969062939c08b28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  1 05:13:47 np0005540825 systemd[1]: libpod-conmon-5191332e41fc6e629ef7c77564aea4595736aed6f924e803b969062939c08b28.scope: Deactivated successfully.
Dec  1 05:13:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:13:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:13:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 179 B/s rd, 0 op/s
Dec  1 05:13:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:47.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.064 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.065 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.065 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.066 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.066 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:13:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:13:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593686561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.474 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:13:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.746 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.748 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4869MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.749 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.749 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.893 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.893 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:13:48 np0005540825 nova_compute[256151]: 2025-12-01 10:13:48.913 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:13:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:48.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:13:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:49.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:13:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:13:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/703416877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:13:49 np0005540825 nova_compute[256151]: 2025-12-01 10:13:49.473 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:13:49 np0005540825 nova_compute[256151]: 2025-12-01 10:13:49.479 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:13:49 np0005540825 nova_compute[256151]: 2025-12-01 10:13:49.511 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:13:49 np0005540825 nova_compute[256151]: 2025-12-01 10:13:49.514 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:13:49 np0005540825 nova_compute[256151]: 2025-12-01 10:13:49.514 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:13:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 179 B/s rd, 0 op/s
Dec  1 05:13:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:49.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:50 np0005540825 nova_compute[256151]: 2025-12-01 10:13:50.514 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:13:50 np0005540825 nova_compute[256151]: 2025-12-01 10:13:50.514 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:13:50 np0005540825 nova_compute[256151]: 2025-12-01 10:13:50.515 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:13:50 np0005540825 nova_compute[256151]: 2025-12-01 10:13:50.515 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:13:51 np0005540825 nova_compute[256151]: 2025-12-01 10:13:51.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:13:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:51] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:13:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:13:51] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Dec  1 05:13:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:51.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 627 B/s wr, 2 op/s
Dec  1 05:13:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:51.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:52 np0005540825 podman[260831]: 2025-12-01 10:13:52.222511747 +0000 UTC m=+0.070747027 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:13:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:52 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:13:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:52 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:13:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:53.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:13:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:53.616Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:13:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:53.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:13:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:13:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:13:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:13:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:55.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Dec  1 05:13:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:55.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:57.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:13:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:57.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:13:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:13:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:57.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:58 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:13:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:13:58.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:13:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:13:59.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:13:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec  1 05:13:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:13:59 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9da4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:13:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:13:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:13:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:13:59.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:00 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d98001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:00 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d7c000b60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:01] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:14:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:01] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:14:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:01.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec  1 05:14:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:01 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d98001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:01.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:02 np0005540825 podman[260876]: 2025-12-01 10:14:02.28811429 +0000 UTC m=+0.112154267 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 05:14:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101402 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:14:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:02 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d7c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:02 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:03.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:14:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:03.617Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:14:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:03.618Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:14:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:03.618Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:03 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d98002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:03.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:04 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d80001140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:04 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d7c001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:14:04.568 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:14:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:14:04.569 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:14:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:14:04.569 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:14:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:05.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:14:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:05 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:05.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:06 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d98002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:06 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d80001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  1 05:14:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/818614316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  1 05:14:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  1 05:14:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/818614316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  1 05:14:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:07.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:07.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec  1 05:14:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:07 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d80001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:14:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:07.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:14:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:08 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:08 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d98002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:08.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:09.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:14:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:14:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:14:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:14:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:14:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:14:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:14:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:14:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:14:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:09 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d7c001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:09.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:10 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d80001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:10 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:11] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:14:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:11] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:14:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:11.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 425 B/s rd, 85 B/s wr, 0 op/s
Dec  1 05:14:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:11 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d98002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:11.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:12 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d7c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:12 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d80001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101412 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:14:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:13.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:14:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:13.619Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:14:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:13.619Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:14:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:13.619Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:14:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:13 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:13.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:14 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d98002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:14 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d98002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:15.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:14:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:15 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d800034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:15.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=404 latency=0.002000055s ======
Dec  1 05:14:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:15.846 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.002000055s
Dec  1 05:14:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - - [01/Dec/2025:10:14:15.861 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000027s
Dec  1 05:14:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:16 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:16 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d7c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:17.173Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:14:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:17.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:14:17 np0005540825 podman[260944]: 2025-12-01 10:14:17.243753428 +0000 UTC m=+0.102121122 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:14:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:17.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:14:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:17 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d98002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:17.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:18 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d800034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:18 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d800034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:18.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:19.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:14:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:19 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d7c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:19.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec  1 05:14:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec  1 05:14:20 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec  1 05:14:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:20 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d98002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[260627]: 01/12/2025 10:14:20 : epoch 692d6a5a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d740039c0 fd 38 proxy ignored for local
Dec  1 05:14:20 np0005540825 kernel: ganesha.nfsd[260863]: segfault at 50 ip 00007f9e5240e32e sp 00007f9e16ffc210 error 4 in libntirpc.so.5.8[7f9e523f3000+2c000] likely on CPU 0 (core 0, socket 0)
Dec  1 05:14:20 np0005540825 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  1 05:14:20 np0005540825 systemd[1]: Started Process Core Dump (PID 260967/UID 0).
Dec  1 05:14:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec  1 05:14:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:21] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:14:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:21] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Dec  1 05:14:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:21.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 614 B/s wr, 1 op/s
Dec  1 05:14:21 np0005540825 systemd-coredump[260968]: Process 260637 (ganesha.nfsd) of user 0 dumped core.
Dec  1 05:14:21 np0005540825 systemd-coredump[260968]: Stack trace of thread 46:
Dec  1 05:14:21 np0005540825 systemd-coredump[260968]: #0  0x00007f9e5240e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Dec  1 05:14:21 np0005540825 systemd-coredump[260968]: ELF object binary architecture: AMD x86-64
Dec  1 05:14:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec  1 05:14:21 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec  1 05:14:21 np0005540825 systemd[1]: systemd-coredump@8-260967-0.service: Deactivated successfully.
Dec  1 05:14:21 np0005540825 systemd[1]: systemd-coredump@8-260967-0.service: Consumed 1.201s CPU time.
Dec  1 05:14:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:21.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:21 np0005540825 podman[260975]: 2025-12-01 10:14:21.785485118 +0000 UTC m=+0.023117035 container died 4f1523bf49bcff099e1266e897b2b41b51c0ec03a98f241add23f9fa9f6c54ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:14:21 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9a7e1b2d49d1439f0720904e007db259ff877d76a9cc697cf7e06079b825f4e5-merged.mount: Deactivated successfully.
Dec  1 05:14:21 np0005540825 podman[260975]: 2025-12-01 10:14:21.935525593 +0000 UTC m=+0.173157500 container remove 4f1523bf49bcff099e1266e897b2b41b51c0ec03a98f241add23f9fa9f6c54ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:14:21 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Main process exited, code=exited, status=139/n/a
Dec  1 05:14:22 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Failed with result 'exit-code'.
Dec  1 05:14:22 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.477s CPU time.
Dec  1 05:14:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec  1 05:14:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec  1 05:14:22 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec  1 05:14:23 np0005540825 podman[261041]: 2025-12-01 10:14:23.230798757 +0000 UTC m=+0.094871985 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  1 05:14:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:23.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1023 B/s wr, 2 op/s
Dec  1 05:14:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:23.620Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:14:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:23.621Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:23.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:14:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:14:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec  1 05:14:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec  1 05:14:24 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec  1 05:14:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:25.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v709: 353 pgs: 353 active+clean; 13 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.3 MiB/s wr, 36 op/s
Dec  1 05:14:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:25.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101426 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:14:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:27.174Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:14:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:27.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:14:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:27.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v710: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 6.8 MiB/s wr, 64 op/s
Dec  1 05:14:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:27.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:28.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:29.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v711: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.1 MiB/s wr, 48 op/s
Dec  1 05:14:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:29.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec  1 05:14:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec  1 05:14:30 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec  1 05:14:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:31] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Dec  1 05:14:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:31] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Dec  1 05:14:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:31.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v713: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.1 MiB/s wr, 48 op/s
Dec  1 05:14:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:31.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:32 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Scheduled restart job, restart counter is at 9.
Dec  1 05:14:32 np0005540825 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:14:32 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.477s CPU time.
Dec  1 05:14:32 np0005540825 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 05:14:32 np0005540825 podman[261072]: 2025-12-01 10:14:32.517667424 +0000 UTC m=+0.131234189 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 05:14:32 np0005540825 podman[261149]: 2025-12-01 10:14:32.735749988 +0000 UTC m=+0.072395338 container create fed1319d819fd5898c56a9ef38aa6debb1796c9bcc92dd3c6d2b291b412be369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:14:32 np0005540825 podman[261149]: 2025-12-01 10:14:32.707059172 +0000 UTC m=+0.043704562 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:14:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1174d25109547b25d1a621aae99b63307b10f957bab37254fea28fbcf9bcb428/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1174d25109547b25d1a621aae99b63307b10f957bab37254fea28fbcf9bcb428/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1174d25109547b25d1a621aae99b63307b10f957bab37254fea28fbcf9bcb428/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1174d25109547b25d1a621aae99b63307b10f957bab37254fea28fbcf9bcb428/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.pytvsu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:32 np0005540825 podman[261149]: 2025-12-01 10:14:32.820775606 +0000 UTC m=+0.157420986 container init fed1319d819fd5898c56a9ef38aa6debb1796c9bcc92dd3c6d2b291b412be369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 05:14:32 np0005540825 podman[261149]: 2025-12-01 10:14:32.830629353 +0000 UTC m=+0.167274703 container start fed1319d819fd5898c56a9ef38aa6debb1796c9bcc92dd3c6d2b291b412be369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec  1 05:14:32 np0005540825 bash[261149]: fed1319d819fd5898c56a9ef38aa6debb1796c9bcc92dd3c6d2b291b412be369
Dec  1 05:14:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:32 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  1 05:14:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:32 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  1 05:14:32 np0005540825 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:14:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:32 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  1 05:14:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:32 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  1 05:14:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:32 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  1 05:14:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:32 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  1 05:14:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:32 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  1 05:14:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:32 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:14:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:33.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v714: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 43 op/s
Dec  1 05:14:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:33.622Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:14:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:33.622Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:14:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:33.622Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:14:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:33.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.320640) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584075320690, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 975, "num_deletes": 256, "total_data_size": 1630015, "memory_usage": 1659456, "flush_reason": "Manual Compaction"}
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584075340061, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1603094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22678, "largest_seqno": 23652, "table_properties": {"data_size": 1598144, "index_size": 2474, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10461, "raw_average_key_size": 19, "raw_value_size": 1588131, "raw_average_value_size": 2914, "num_data_blocks": 108, "num_entries": 545, "num_filter_entries": 545, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764584001, "oldest_key_time": 1764584001, "file_creation_time": 1764584075, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 19485 microseconds, and 7492 cpu microseconds.
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.340123) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1603094 bytes OK
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.340151) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.341983) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.342006) EVENT_LOG_v1 {"time_micros": 1764584075341999, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.342030) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1625430, prev total WAL file size 1625430, number of live WAL files 2.
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.343226) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(1565KB)], [50(11MB)]
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584075343344, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13556408, "oldest_snapshot_seqno": -1}
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5313 keys, 13364686 bytes, temperature: kUnknown
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584075432875, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13364686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13329047, "index_size": 21257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 136188, "raw_average_key_size": 25, "raw_value_size": 13232512, "raw_average_value_size": 2490, "num_data_blocks": 865, "num_entries": 5313, "num_filter_entries": 5313, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764584075, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.433228) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13364686 bytes
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.435060) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.2 rd, 149.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.4 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(16.8) write-amplify(8.3) OK, records in: 5847, records dropped: 534 output_compression: NoCompression
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.435092) EVENT_LOG_v1 {"time_micros": 1764584075435078, "job": 26, "event": "compaction_finished", "compaction_time_micros": 89660, "compaction_time_cpu_micros": 49360, "output_level": 6, "num_output_files": 1, "total_output_size": 13364686, "num_input_records": 5847, "num_output_records": 5313, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584075435747, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584075439800, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.343064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.439913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.439922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.439926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.439930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:14:35 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:14:35.439934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:14:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:35.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v715: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.8 MiB/s wr, 21 op/s
Dec  1 05:14:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:35.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:37.176Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:14:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:37.176Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:14:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:37.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:37.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v716: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 409 B/s wr, 1 op/s
Dec  1 05:14:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:37.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:38 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to list kv ret=-2
Dec  1 05:14:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:38 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  1 05:14:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:38 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:14:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:38 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:14:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:38 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:14:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:38.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:39 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:14:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:39 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:14:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:39 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:14:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:39 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:14:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:39 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:14:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:39 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:14:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:39 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:14:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:39.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:14:39
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.log', 'images', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'volumes']
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:14:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:14:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v717: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 409 B/s wr, 1 op/s
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:14:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:39.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:14:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:14:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:41] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Dec  1 05:14:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:41] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Dec  1 05:14:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:41.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v718: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec  1 05:14:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:41.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101442 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:14:42 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:14:42.729 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 05:14:42 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:14:42.731 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 05:14:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:43.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v719: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec  1 05:14:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:43.623Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:14:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:43.623Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:43.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:45 np0005540825 nova_compute[256151]: 2025-12-01 10:14:45.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:45 np0005540825 nova_compute[256151]: 2025-12-01 10:14:45.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  1 05:14:45 np0005540825 nova_compute[256151]: 2025-12-01 10:14:45.055 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  1 05:14:45 np0005540825 nova_compute[256151]: 2025-12-01 10:14:45.057 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:45 np0005540825 nova_compute[256151]: 2025-12-01 10:14:45.058 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000026:nfs.cephfs.2: -2
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:14:45 np0005540825 nova_compute[256151]: 2025-12-01 10:14:45.072 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:14:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:45.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v720: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec  1 05:14:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:45 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a24000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:45.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:46 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a24000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:46 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59fc000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:47 np0005540825 nova_compute[256151]: 2025-12-01 10:14:47.083 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:47 np0005540825 nova_compute[256151]: 2025-12-01 10:14:47.084 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:14:47 np0005540825 nova_compute[256151]: 2025-12-01 10:14:47.085 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:14:47 np0005540825 nova_compute[256151]: 2025-12-01 10:14:47.100 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:14:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:47.178Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:47.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v721: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec  1 05:14:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:47 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a20002060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:14:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:47.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:48 np0005540825 nova_compute[256151]: 2025-12-01 10:14:48.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:48 np0005540825 nova_compute[256151]: 2025-12-01 10:14:48.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:48 np0005540825 podman[261287]: 2025-12-01 10:14:48.079234292 +0000 UTC m=+0.067831275 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:14:48 np0005540825 kernel: ganesha.nfsd[261258]: segfault at 50 ip 00007f5ad1c2232e sp 00007f5a8affc210 error 4 in libntirpc.so.5.8[7f5ad1c07000+2c000] likely on CPU 0 (core 0, socket 0)
Dec  1 05:14:48 np0005540825 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  1 05:14:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[261165]: 01/12/2025 10:14:48 : epoch 692d6a88 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4000b60 fd 38 proxy ignored for local
Dec  1 05:14:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101448 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:14:48 np0005540825 systemd[1]: Started Process Core Dump (PID 261351/UID 0).
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:14:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v722: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:14:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:14:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:48.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:14:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:48.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:49 np0005540825 nova_compute[256151]: 2025-12-01 10:14:49.023 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:49 np0005540825 podman[261457]: 2025-12-01 10:14:49.380928759 +0000 UTC m=+0.048921143 container create 840f7cce804943a459e95e7a8a7041cfabc2aeea9260421e11955f67d003681b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_noyce, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:14:49 np0005540825 systemd[1]: Started libpod-conmon-840f7cce804943a459e95e7a8a7041cfabc2aeea9260421e11955f67d003681b.scope.
Dec  1 05:14:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:14:49 np0005540825 podman[261457]: 2025-12-01 10:14:49.360076955 +0000 UTC m=+0.028069369 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:14:49 np0005540825 podman[261457]: 2025-12-01 10:14:49.470441539 +0000 UTC m=+0.138433963 container init 840f7cce804943a459e95e7a8a7041cfabc2aeea9260421e11955f67d003681b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_noyce, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  1 05:14:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:49.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:49 np0005540825 podman[261457]: 2025-12-01 10:14:49.48306842 +0000 UTC m=+0.151060804 container start 840f7cce804943a459e95e7a8a7041cfabc2aeea9260421e11955f67d003681b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_noyce, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 05:14:49 np0005540825 podman[261457]: 2025-12-01 10:14:49.486529704 +0000 UTC m=+0.154522138 container attach 840f7cce804943a459e95e7a8a7041cfabc2aeea9260421e11955f67d003681b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:14:49 np0005540825 wizardly_noyce[261473]: 167 167
Dec  1 05:14:49 np0005540825 systemd[1]: libpod-840f7cce804943a459e95e7a8a7041cfabc2aeea9260421e11955f67d003681b.scope: Deactivated successfully.
Dec  1 05:14:49 np0005540825 conmon[261473]: conmon 840f7cce804943a459e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-840f7cce804943a459e95e7a8a7041cfabc2aeea9260421e11955f67d003681b.scope/container/memory.events
Dec  1 05:14:49 np0005540825 podman[261457]: 2025-12-01 10:14:49.492973788 +0000 UTC m=+0.160966172 container died 840f7cce804943a459e95e7a8a7041cfabc2aeea9260421e11955f67d003681b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_noyce, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  1 05:14:49 np0005540825 systemd[1]: var-lib-containers-storage-overlay-433b540ead3030575657a78972b637796274c194e2e204b09d5e558518b7d10e-merged.mount: Deactivated successfully.
Dec  1 05:14:49 np0005540825 podman[261457]: 2025-12-01 10:14:49.539518926 +0000 UTC m=+0.207511310 container remove 840f7cce804943a459e95e7a8a7041cfabc2aeea9260421e11955f67d003681b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:14:49 np0005540825 systemd[1]: libpod-conmon-840f7cce804943a459e95e7a8a7041cfabc2aeea9260421e11955f67d003681b.scope: Deactivated successfully.
Dec  1 05:14:49 np0005540825 systemd-coredump[261352]: Process 261169 (ganesha.nfsd) of user 0 dumped core.
    
    Stack trace of thread 52:
    #0  0x00007f5ad1c2232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
    ELF object binary architecture: AMD x86-64
Dec  1 05:14:49 np0005540825 podman[261500]: 2025-12-01 10:14:49.761846046 +0000 UTC m=+0.061661108 container create 20c981729702ec7d084fe5ab2615a935b84d751476eea3a7b659cfcc10d33378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shannon, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 05:14:49 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:14:49 np0005540825 systemd[1]: systemd-coredump@9-261351-0.service: Deactivated successfully.
Dec  1 05:14:49 np0005540825 systemd[1]: systemd-coredump@9-261351-0.service: Consumed 1.193s CPU time.
Dec  1 05:14:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:49.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:49 np0005540825 systemd[1]: Started libpod-conmon-20c981729702ec7d084fe5ab2615a935b84d751476eea3a7b659cfcc10d33378.scope.
Dec  1 05:14:49 np0005540825 podman[261500]: 2025-12-01 10:14:49.735980307 +0000 UTC m=+0.035795449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:14:49 np0005540825 podman[261520]: 2025-12-01 10:14:49.861361686 +0000 UTC m=+0.033644890 container died fed1319d819fd5898c56a9ef38aa6debb1796c9bcc92dd3c6d2b291b412be369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:14:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:14:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45cf50480074ee0aadf2b88998b5245a1cc2cd92ea7d4dae67d94b9f202569b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45cf50480074ee0aadf2b88998b5245a1cc2cd92ea7d4dae67d94b9f202569b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45cf50480074ee0aadf2b88998b5245a1cc2cd92ea7d4dae67d94b9f202569b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45cf50480074ee0aadf2b88998b5245a1cc2cd92ea7d4dae67d94b9f202569b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:49 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45cf50480074ee0aadf2b88998b5245a1cc2cd92ea7d4dae67d94b9f202569b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:49 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1174d25109547b25d1a621aae99b63307b10f957bab37254fea28fbcf9bcb428-merged.mount: Deactivated successfully.
Dec  1 05:14:49 np0005540825 podman[261500]: 2025-12-01 10:14:49.906427944 +0000 UTC m=+0.206243096 container init 20c981729702ec7d084fe5ab2615a935b84d751476eea3a7b659cfcc10d33378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  1 05:14:49 np0005540825 podman[261500]: 2025-12-01 10:14:49.917923875 +0000 UTC m=+0.217738977 container start 20c981729702ec7d084fe5ab2615a935b84d751476eea3a7b659cfcc10d33378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:14:49 np0005540825 podman[261500]: 2025-12-01 10:14:49.930281439 +0000 UTC m=+0.230096531 container attach 20c981729702ec7d084fe5ab2615a935b84d751476eea3a7b659cfcc10d33378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 05:14:49 np0005540825 podman[261520]: 2025-12-01 10:14:49.937741041 +0000 UTC m=+0.110024265 container remove fed1319d819fd5898c56a9ef38aa6debb1796c9bcc92dd3c6d2b291b412be369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 05:14:49 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Main process exited, code=exited, status=139/n/a
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.029 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.029 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.029 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.054 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.055 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.055 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.056 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.057 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:14:50 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Failed with result 'exit-code'.
Dec  1 05:14:50 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.561s CPU time.
Dec  1 05:14:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:50 np0005540825 wonderful_shannon[261525]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:14:50 np0005540825 wonderful_shannon[261525]: --> All data devices are unavailable
Dec  1 05:14:50 np0005540825 systemd[1]: libpod-20c981729702ec7d084fe5ab2615a935b84d751476eea3a7b659cfcc10d33378.scope: Deactivated successfully.
Dec  1 05:14:50 np0005540825 podman[261500]: 2025-12-01 10:14:50.345567525 +0000 UTC m=+0.645382587 container died 20c981729702ec7d084fe5ab2615a935b84d751476eea3a7b659cfcc10d33378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shannon, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 05:14:50 np0005540825 systemd[1]: var-lib-containers-storage-overlay-45cf50480074ee0aadf2b88998b5245a1cc2cd92ea7d4dae67d94b9f202569b4-merged.mount: Deactivated successfully.
Dec  1 05:14:50 np0005540825 podman[261500]: 2025-12-01 10:14:50.397659273 +0000 UTC m=+0.697474335 container remove 20c981729702ec7d084fe5ab2615a935b84d751476eea3a7b659cfcc10d33378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:14:50 np0005540825 systemd[1]: libpod-conmon-20c981729702ec7d084fe5ab2615a935b84d751476eea3a7b659cfcc10d33378.scope: Deactivated successfully.
Dec  1 05:14:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:14:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4241038156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.585 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:14:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v723: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.756 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.758 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4887MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.758 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.758 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.848 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.848 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.897 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing inventories for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.966 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating ProviderTree inventory for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.967 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating inventory in ProviderTree for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 05:14:50 np0005540825 nova_compute[256151]: 2025-12-01 10:14:50.980 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing aggregate associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  1 05:14:51 np0005540825 podman[261705]: 2025-12-01 10:14:51.020818897 +0000 UTC m=+0.062003597 container create ed16ee11b20c10b72f316b6f42994c5926882512af90a3338ed156d9b69858f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sanderson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:14:51 np0005540825 nova_compute[256151]: 2025-12-01 10:14:51.031 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing trait associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SVM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  1 05:14:51 np0005540825 nova_compute[256151]: 2025-12-01 10:14:51.053 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:14:51 np0005540825 systemd[1]: Started libpod-conmon-ed16ee11b20c10b72f316b6f42994c5926882512af90a3338ed156d9b69858f1.scope.
Dec  1 05:14:51 np0005540825 podman[261705]: 2025-12-01 10:14:50.99686856 +0000 UTC m=+0.038053340 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:14:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:14:51 np0005540825 podman[261705]: 2025-12-01 10:14:51.126094063 +0000 UTC m=+0.167278773 container init ed16ee11b20c10b72f316b6f42994c5926882512af90a3338ed156d9b69858f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sanderson, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:14:51 np0005540825 podman[261705]: 2025-12-01 10:14:51.135469836 +0000 UTC m=+0.176654566 container start ed16ee11b20c10b72f316b6f42994c5926882512af90a3338ed156d9b69858f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  1 05:14:51 np0005540825 podman[261705]: 2025-12-01 10:14:51.140169643 +0000 UTC m=+0.181354363 container attach ed16ee11b20c10b72f316b6f42994c5926882512af90a3338ed156d9b69858f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sanderson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:14:51 np0005540825 flamboyant_sanderson[261722]: 167 167
Dec  1 05:14:51 np0005540825 systemd[1]: libpod-ed16ee11b20c10b72f316b6f42994c5926882512af90a3338ed156d9b69858f1.scope: Deactivated successfully.
Dec  1 05:14:51 np0005540825 podman[261705]: 2025-12-01 10:14:51.143465963 +0000 UTC m=+0.184650693 container died ed16ee11b20c10b72f316b6f42994c5926882512af90a3338ed156d9b69858f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  1 05:14:51 np0005540825 systemd[1]: var-lib-containers-storage-overlay-03ee96a9424f27aa022c58587eaba5adbd14d6e90cf566dc0770376faf8cb38c-merged.mount: Deactivated successfully.
Dec  1 05:14:51 np0005540825 podman[261705]: 2025-12-01 10:14:51.325280137 +0000 UTC m=+0.366464867 container remove ed16ee11b20c10b72f316b6f42994c5926882512af90a3338ed156d9b69858f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sanderson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 05:14:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:51] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Dec  1 05:14:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:14:51] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Dec  1 05:14:51 np0005540825 systemd[1]: libpod-conmon-ed16ee11b20c10b72f316b6f42994c5926882512af90a3338ed156d9b69858f1.scope: Deactivated successfully.
Dec  1 05:14:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:51.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:14:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905717349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:14:51 np0005540825 podman[261765]: 2025-12-01 10:14:51.562967592 +0000 UTC m=+0.058112831 container create 29edfcaaa8d29aedf107f5cb1136873a194ba48be0322a416ff3f2f3c3ca23ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:14:51 np0005540825 nova_compute[256151]: 2025-12-01 10:14:51.571 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:14:51 np0005540825 nova_compute[256151]: 2025-12-01 10:14:51.579 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:14:51 np0005540825 nova_compute[256151]: 2025-12-01 10:14:51.597 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:14:51 np0005540825 nova_compute[256151]: 2025-12-01 10:14:51.599 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:14:51 np0005540825 nova_compute[256151]: 2025-12-01 10:14:51.600 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:14:51 np0005540825 systemd[1]: Started libpod-conmon-29edfcaaa8d29aedf107f5cb1136873a194ba48be0322a416ff3f2f3c3ca23ed.scope.
Dec  1 05:14:51 np0005540825 podman[261765]: 2025-12-01 10:14:51.544255267 +0000 UTC m=+0.039400486 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:14:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:14:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1543de7a627bfbd1fd48ecfa77ea207fb988f1c5b1191efcb8a8061f36252ca6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1543de7a627bfbd1fd48ecfa77ea207fb988f1c5b1191efcb8a8061f36252ca6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1543de7a627bfbd1fd48ecfa77ea207fb988f1c5b1191efcb8a8061f36252ca6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1543de7a627bfbd1fd48ecfa77ea207fb988f1c5b1191efcb8a8061f36252ca6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:51 np0005540825 podman[261765]: 2025-12-01 10:14:51.665783062 +0000 UTC m=+0.160928311 container init 29edfcaaa8d29aedf107f5cb1136873a194ba48be0322a416ff3f2f3c3ca23ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  1 05:14:51 np0005540825 podman[261765]: 2025-12-01 10:14:51.678044193 +0000 UTC m=+0.173189402 container start 29edfcaaa8d29aedf107f5cb1136873a194ba48be0322a416ff3f2f3c3ca23ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_rosalind, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:14:51 np0005540825 podman[261765]: 2025-12-01 10:14:51.681846796 +0000 UTC m=+0.176992005 container attach 29edfcaaa8d29aedf107f5cb1136873a194ba48be0322a416ff3f2f3c3ca23ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_rosalind, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:14:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:51.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]: {
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:    "1": [
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:        {
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            "devices": [
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "/dev/loop3"
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            ],
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            "lv_name": "ceph_lv0",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            "lv_size": "21470642176",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            "name": "ceph_lv0",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            "tags": {
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.cluster_name": "ceph",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.crush_device_class": "",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.encrypted": "0",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.osd_id": "1",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.type": "block",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.vdo": "0",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:                "ceph.with_tpm": "0"
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            },
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            "type": "block",
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:            "vg_name": "ceph_vg0"
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:        }
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]:    ]
Dec  1 05:14:51 np0005540825 optimistic_rosalind[261784]: }
Dec  1 05:14:52 np0005540825 systemd[1]: libpod-29edfcaaa8d29aedf107f5cb1136873a194ba48be0322a416ff3f2f3c3ca23ed.scope: Deactivated successfully.
Dec  1 05:14:52 np0005540825 podman[261765]: 2025-12-01 10:14:52.014165369 +0000 UTC m=+0.509310608 container died 29edfcaaa8d29aedf107f5cb1136873a194ba48be0322a416ff3f2f3c3ca23ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_rosalind, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  1 05:14:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1543de7a627bfbd1fd48ecfa77ea207fb988f1c5b1191efcb8a8061f36252ca6-merged.mount: Deactivated successfully.
Dec  1 05:14:52 np0005540825 podman[261765]: 2025-12-01 10:14:52.064565431 +0000 UTC m=+0.559710670 container remove 29edfcaaa8d29aedf107f5cb1136873a194ba48be0322a416ff3f2f3c3ca23ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_rosalind, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 05:14:52 np0005540825 systemd[1]: libpod-conmon-29edfcaaa8d29aedf107f5cb1136873a194ba48be0322a416ff3f2f3c3ca23ed.scope: Deactivated successfully.
Dec  1 05:14:52 np0005540825 nova_compute[256151]: 2025-12-01 10:14:52.599 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:52 np0005540825 nova_compute[256151]: 2025-12-01 10:14:52.600 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:14:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v724: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1012 B/s rd, 276 B/s wr, 1 op/s
Dec  1 05:14:52 np0005540825 podman[261899]: 2025-12-01 10:14:52.720558764 +0000 UTC m=+0.042152070 container create f9271a7eea04ece2b32f0e056686a3105c43a917ed9350bdc9b922bcb8667a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackwell, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:14:52 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:14:52.733 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 05:14:52 np0005540825 systemd[1]: Started libpod-conmon-f9271a7eea04ece2b32f0e056686a3105c43a917ed9350bdc9b922bcb8667a65.scope.
Dec  1 05:14:52 np0005540825 podman[261899]: 2025-12-01 10:14:52.700331687 +0000 UTC m=+0.021924973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:14:52 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:14:52 np0005540825 podman[261899]: 2025-12-01 10:14:52.826175929 +0000 UTC m=+0.147769225 container init f9271a7eea04ece2b32f0e056686a3105c43a917ed9350bdc9b922bcb8667a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:14:52 np0005540825 podman[261899]: 2025-12-01 10:14:52.835040399 +0000 UTC m=+0.156633685 container start f9271a7eea04ece2b32f0e056686a3105c43a917ed9350bdc9b922bcb8667a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  1 05:14:52 np0005540825 podman[261899]: 2025-12-01 10:14:52.838596545 +0000 UTC m=+0.160189841 container attach f9271a7eea04ece2b32f0e056686a3105c43a917ed9350bdc9b922bcb8667a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackwell, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:14:52 np0005540825 condescending_blackwell[261915]: 167 167
Dec  1 05:14:52 np0005540825 systemd[1]: libpod-f9271a7eea04ece2b32f0e056686a3105c43a917ed9350bdc9b922bcb8667a65.scope: Deactivated successfully.
Dec  1 05:14:52 np0005540825 conmon[261915]: conmon f9271a7eea04ece2b32f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f9271a7eea04ece2b32f0e056686a3105c43a917ed9350bdc9b922bcb8667a65.scope/container/memory.events
Dec  1 05:14:52 np0005540825 podman[261899]: 2025-12-01 10:14:52.843503708 +0000 UTC m=+0.165097014 container died f9271a7eea04ece2b32f0e056686a3105c43a917ed9350bdc9b922bcb8667a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:14:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay-079cda59a71e9e423dfb8883b70820733bae4240662cc4b565b7975c2fdf9ba7-merged.mount: Deactivated successfully.
Dec  1 05:14:52 np0005540825 podman[261899]: 2025-12-01 10:14:52.883806477 +0000 UTC m=+0.205399773 container remove f9271a7eea04ece2b32f0e056686a3105c43a917ed9350bdc9b922bcb8667a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  1 05:14:52 np0005540825 systemd[1]: libpod-conmon-f9271a7eea04ece2b32f0e056686a3105c43a917ed9350bdc9b922bcb8667a65.scope: Deactivated successfully.
Dec  1 05:14:53 np0005540825 podman[261938]: 2025-12-01 10:14:53.063181466 +0000 UTC m=+0.049194091 container create d056607dba81c92b581f6d6c7535d5728bb304c976360fed0d42fbf7f69a2648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  1 05:14:53 np0005540825 systemd[1]: Started libpod-conmon-d056607dba81c92b581f6d6c7535d5728bb304c976360fed0d42fbf7f69a2648.scope.
Dec  1 05:14:53 np0005540825 podman[261938]: 2025-12-01 10:14:53.040812031 +0000 UTC m=+0.026824646 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:14:53 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:14:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cc02edae515310db8e3d448585f4a7cefd57530f70a12ede17a5617c8f25f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cc02edae515310db8e3d448585f4a7cefd57530f70a12ede17a5617c8f25f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cc02edae515310db8e3d448585f4a7cefd57530f70a12ede17a5617c8f25f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:53 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cc02edae515310db8e3d448585f4a7cefd57530f70a12ede17a5617c8f25f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:14:53 np0005540825 podman[261938]: 2025-12-01 10:14:53.170949529 +0000 UTC m=+0.156962174 container init d056607dba81c92b581f6d6c7535d5728bb304c976360fed0d42fbf7f69a2648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 05:14:53 np0005540825 podman[261938]: 2025-12-01 10:14:53.182584964 +0000 UTC m=+0.168597589 container start d056607dba81c92b581f6d6c7535d5728bb304c976360fed0d42fbf7f69a2648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:14:53 np0005540825 podman[261938]: 2025-12-01 10:14:53.196936462 +0000 UTC m=+0.182949097 container attach d056607dba81c92b581f6d6c7535d5728bb304c976360fed0d42fbf7f69a2648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:14:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:53.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:53.624Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:53.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:53 np0005540825 lvm[262037]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:14:53 np0005540825 lvm[262037]: VG ceph_vg0 finished
Dec  1 05:14:53 np0005540825 angry_sutherland[261954]: {}
Dec  1 05:14:53 np0005540825 podman[262030]: 2025-12-01 10:14:53.980657567 +0000 UTC m=+0.081761101 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 05:14:53 np0005540825 systemd[1]: libpod-d056607dba81c92b581f6d6c7535d5728bb304c976360fed0d42fbf7f69a2648.scope: Deactivated successfully.
Dec  1 05:14:53 np0005540825 systemd[1]: libpod-d056607dba81c92b581f6d6c7535d5728bb304c976360fed0d42fbf7f69a2648.scope: Consumed 1.365s CPU time.
Dec  1 05:14:53 np0005540825 podman[261938]: 2025-12-01 10:14:53.985842947 +0000 UTC m=+0.971855582 container died d056607dba81c92b581f6d6c7535d5728bb304c976360fed0d42fbf7f69a2648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 05:14:54 np0005540825 systemd[1]: var-lib-containers-storage-overlay-42cc02edae515310db8e3d448585f4a7cefd57530f70a12ede17a5617c8f25f1-merged.mount: Deactivated successfully.
Dec  1 05:14:54 np0005540825 podman[261938]: 2025-12-01 10:14:54.03888275 +0000 UTC m=+1.024895345 container remove d056607dba81c92b581f6d6c7535d5728bb304c976360fed0d42fbf7f69a2648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 05:14:54 np0005540825 systemd[1]: libpod-conmon-d056607dba81c92b581f6d6c7535d5728bb304c976360fed0d42fbf7f69a2648.scope: Deactivated successfully.
Dec  1 05:14:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:14:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:14:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:14:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:14:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101454 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:14:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:14:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:14:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v725: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1012 B/s rd, 276 B/s wr, 1 op/s
Dec  1 05:14:55 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:14:55 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:14:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:14:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:55.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 276 B/s wr, 1 op/s
Dec  1 05:14:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:57.179Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:14:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:57.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:14:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:57.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 184 B/s rd, 0 op/s
Dec  1 05:14:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:14:58.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:14:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:14:59.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:14:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:14:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:14:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:14:59.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:00 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Scheduled restart job, restart counter is at 10.
Dec  1 05:15:00 np0005540825 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:15:00 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 1.561s CPU time.
Dec  1 05:15:00 np0005540825 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 05:15:00 np0005540825 podman[262145]: 2025-12-01 10:15:00.626372199 +0000 UTC m=+0.055683626 container create 175072eb9ad8288754525f1835b155d486baa9b9919fdcbe6ed4f80c20993ee5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  1 05:15:00 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9f823e5f78d38f77547afe72b9f24b6a0fcaa37b3b2d117d60ff427085d7cd/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  1 05:15:00 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9f823e5f78d38f77547afe72b9f24b6a0fcaa37b3b2d117d60ff427085d7cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:15:00 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9f823e5f78d38f77547afe72b9f24b6a0fcaa37b3b2d117d60ff427085d7cd/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:15:00 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9f823e5f78d38f77547afe72b9f24b6a0fcaa37b3b2d117d60ff427085d7cd/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.pytvsu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:15:00 np0005540825 podman[262145]: 2025-12-01 10:15:00.697263555 +0000 UTC m=+0.126574992 container init 175072eb9ad8288754525f1835b155d486baa9b9919fdcbe6ed4f80c20993ee5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 05:15:00 np0005540825 podman[262145]: 2025-12-01 10:15:00.599594485 +0000 UTC m=+0.028905952 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:15:00 np0005540825 podman[262145]: 2025-12-01 10:15:00.705920029 +0000 UTC m=+0.135231456 container start 175072eb9ad8288754525f1835b155d486baa9b9919fdcbe6ed4f80c20993ee5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:15:00 np0005540825 bash[262145]: 175072eb9ad8288754525f1835b155d486baa9b9919fdcbe6ed4f80c20993ee5
Dec  1 05:15:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  1 05:15:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  1 05:15:00 np0005540825 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:15:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v728: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:15:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  1 05:15:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  1 05:15:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  1 05:15:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  1 05:15:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  1 05:15:00 np0005540825 nova_compute[256151]: 2025-12-01 10:15:00.794 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:15:00 np0005540825 nova_compute[256151]: 2025-12-01 10:15:00.795 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:15:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:15:00 np0005540825 nova_compute[256151]: 2025-12-01 10:15:00.823 256155 DEBUG nova.compute.manager [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 05:15:01 np0005540825 nova_compute[256151]: 2025-12-01 10:15:01.297 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:15:01 np0005540825 nova_compute[256151]: 2025-12-01 10:15:01.298 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:15:01 np0005540825 nova_compute[256151]: 2025-12-01 10:15:01.313 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 05:15:01 np0005540825 nova_compute[256151]: 2025-12-01 10:15:01.313 256155 INFO nova.compute.claims [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Claim successful on node compute-0.ctlplane.example.com
Dec  1 05:15:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:01] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Dec  1 05:15:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:01] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Dec  1 05:15:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:01.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:01 np0005540825 nova_compute[256151]: 2025-12-01 10:15:01.520 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:15:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:01.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:15:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1912336479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:15:01 np0005540825 nova_compute[256151]: 2025-12-01 10:15:01.962 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:15:01 np0005540825 nova_compute[256151]: 2025-12-01 10:15:01.972 256155 DEBUG nova.compute.provider_tree [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:15:01 np0005540825 nova_compute[256151]: 2025-12-01 10:15:01.993 256155 DEBUG nova.scheduler.client.report [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.033 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.034 256155 DEBUG nova.compute.manager [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.100 256155 DEBUG nova.compute.manager [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.101 256155 DEBUG nova.network.neutron [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.150 256155 INFO nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.173 256155 DEBUG nova.compute.manager [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.288 256155 DEBUG nova.compute.manager [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.290 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.291 256155 INFO nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Creating image(s)
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.332 256155 DEBUG nova.storage.rbd_utils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.375 256155 DEBUG nova.storage.rbd_utils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.410 256155 DEBUG nova.storage.rbd_utils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.414 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:15:02 np0005540825 nova_compute[256151]: 2025-12-01 10:15:02.415 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:15:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v729: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:15:03 np0005540825 nova_compute[256151]: 2025-12-01 10:15:03.165 256155 DEBUG nova.virt.libvirt.imagebackend [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image locations are: [{'url': 'rbd://365f19c2-81e5-5edd-b6b4-280555214d3a/images/8f75d6de-6ce0-44e1-b417-d0111424475b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://365f19c2-81e5-5edd-b6b4-280555214d3a/images/8f75d6de-6ce0-44e1-b417-d0111424475b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec  1 05:15:03 np0005540825 podman[262280]: 2025-12-01 10:15:03.2492684 +0000 UTC m=+0.108316439 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_controller)
Dec  1 05:15:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:03.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:03.624Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:03 np0005540825 nova_compute[256151]: 2025-12-01 10:15:03.665 256155 WARNING oslo_policy.policy [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec  1 05:15:03 np0005540825 nova_compute[256151]: 2025-12-01 10:15:03.666 256155 WARNING oslo_policy.policy [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec  1 05:15:03 np0005540825 nova_compute[256151]: 2025-12-01 10:15:03.671 256155 DEBUG nova.policy [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b56a238daf0445798410e51caada0ff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f6be4e572624210b91193c011607c08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  1 05:15:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:03.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:04.570 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:15:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:04.571 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:15:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:04.571 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:15:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v730: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.122 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.206 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34.part --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.208 256155 DEBUG nova.virt.images [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] 8f75d6de-6ce0-44e1-b417-d0111424475b was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.209 256155 DEBUG nova.privsep.utils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.209 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34.part /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:15:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.448 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34.part /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34.converted" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.457 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:15:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:05.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.541 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34.converted --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.544 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.585 256155 DEBUG nova.storage.rbd_utils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.590 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:15:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:05.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:05 np0005540825 nova_compute[256151]: 2025-12-01 10:15:05.942 256155 DEBUG nova.network.neutron [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Successfully created port: f76722ac-216e-4706-9ca6-804d90bbbc7f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  1 05:15:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec  1 05:15:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec  1 05:15:06 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec  1 05:15:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 716 B/s wr, 10 op/s
Dec  1 05:15:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:06 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:15:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:06 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:15:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:07.180Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec  1 05:15:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec  1 05:15:07 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec  1 05:15:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:07.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:07 np0005540825 nova_compute[256151]: 2025-12-01 10:15:07.586 256155 DEBUG nova.network.neutron [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Successfully updated port: f76722ac-216e-4706-9ca6-804d90bbbc7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 05:15:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:07.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.020 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.106 256155 DEBUG nova.storage.rbd_utils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] resizing rbd image 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.477 256155 DEBUG nova.objects.instance [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'migration_context' on Instance uuid 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.496 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.496 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquired lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.497 256155 DEBUG nova.network.neutron [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.534 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.534 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Ensure instance console log exists: /var/lib/nova/instances/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.535 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.535 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.535 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.635 256155 DEBUG nova.compute.manager [req-3719add3-9c39-4744-81bd-b649b733434d req-d290c666-4e6a-44c7-8ba7-3e051cb3c93f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Received event network-changed-f76722ac-216e-4706-9ca6-804d90bbbc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.635 256155 DEBUG nova.compute.manager [req-3719add3-9c39-4744-81bd-b649b733434d req-d290c666-4e6a-44c7-8ba7-3e051cb3c93f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Refreshing instance network info cache due to event network-changed-f76722ac-216e-4706-9ca6-804d90bbbc7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 05:15:08 np0005540825 nova_compute[256151]: 2025-12-01 10:15:08.636 256155 DEBUG oslo_concurrency.lockutils [req-3719add3-9c39-4744-81bd-b649b733434d req-d290c666-4e6a-44c7-8ba7-3e051cb3c93f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:15:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v734: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 895 B/s wr, 12 op/s
Dec  1 05:15:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:08.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:15:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:08.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:09.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:15:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:15:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:15:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:15:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:15:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:15:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:15:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:15:09 np0005540825 nova_compute[256151]: 2025-12-01 10:15:09.641 256155 DEBUG nova.network.neutron [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 05:15:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:09.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 72 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.3 MiB/s wr, 36 op/s
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.795 256155 DEBUG nova.network.neutron [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Updating instance_info_cache with network_info: [{"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.831 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Releasing lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.832 256155 DEBUG nova.compute.manager [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Instance network_info: |[{"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.833 256155 DEBUG oslo_concurrency.lockutils [req-3719add3-9c39-4744-81bd-b649b733434d req-d290c666-4e6a-44c7-8ba7-3e051cb3c93f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.833 256155 DEBUG nova.network.neutron [req-3719add3-9c39-4744-81bd-b649b733434d req-d290c666-4e6a-44c7-8ba7-3e051cb3c93f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Refreshing network info cache for port f76722ac-216e-4706-9ca6-804d90bbbc7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.839 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Start _get_guest_xml network_info=[{"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '8f75d6de-6ce0-44e1-b417-d0111424475b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.848 256155 WARNING nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.854 256155 DEBUG nova.virt.libvirt.host [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.855 256155 DEBUG nova.virt.libvirt.host [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.863 256155 DEBUG nova.virt.libvirt.host [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.864 256155 DEBUG nova.virt.libvirt.host [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.864 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.865 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T10:14:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e731827-1896-49cd-b0cc-12903555d217',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.865 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.866 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.866 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.866 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.866 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.867 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.867 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.867 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.867 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.868 256155 DEBUG nova.virt.hardware [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.871 256155 DEBUG nova.privsep.utils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 05:15:10 np0005540825 nova_compute[256151]: 2025-12-01 10:15:10.872 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:15:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:11] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Dec  1 05:15:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:11] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Dec  1 05:15:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:15:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391718927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.414 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.442 256155 DEBUG nova.storage.rbd_utils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.446 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:15:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:11.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  1 05:15:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:11.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  1 05:15:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:15:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629626178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.933 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.935 256155 DEBUG nova.virt.libvirt.vif [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1016315753',display_name='tempest-TestNetworkBasicOps-server-1016315753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1016315753',id=1,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJyiZhD79g//PFP56TQaBy3YxEM3LBaA7EcVZ7Tdz/6gMAGTnZhgjP7lR7qjlPZM7TMPAJaWDsBbZE4mpPdHpXPHvYJjJulnETj6bgJEdlnDSD6q5Pc5uIGO8IM6SZd+A==',key_name='tempest-TestNetworkBasicOps-370696341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-a9okexes',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:15:02Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.936 256155 DEBUG nova.network.os_vif_util [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.937 256155 DEBUG nova.network.os_vif_util [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:86:43,bridge_name='br-int',has_traffic_filtering=True,id=f76722ac-216e-4706-9ca6-804d90bbbc7f,network=Network(8c466ba6-3850-4dac-846e-cf97ed839b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf76722ac-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.940 256155 DEBUG nova.objects.instance [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'pci_devices' on Instance uuid 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.959 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] End _get_guest_xml xml=<domain type="kvm">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  <uuid>60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20</uuid>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  <name>instance-00000001</name>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  <memory>131072</memory>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  <vcpu>1</vcpu>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  <metadata>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <nova:name>tempest-TestNetworkBasicOps-server-1016315753</nova:name>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <nova:creationTime>2025-12-01 10:15:10</nova:creationTime>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <nova:flavor name="m1.nano">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <nova:memory>128</nova:memory>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <nova:disk>1</nova:disk>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <nova:swap>0</nova:swap>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <nova:vcpus>1</nova:vcpus>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      </nova:flavor>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <nova:owner>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <nova:user uuid="5b56a238daf0445798410e51caada0ff">tempest-TestNetworkBasicOps-1248115384-project-member</nova:user>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <nova:project uuid="9f6be4e572624210b91193c011607c08">tempest-TestNetworkBasicOps-1248115384</nova:project>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      </nova:owner>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <nova:root type="image" uuid="8f75d6de-6ce0-44e1-b417-d0111424475b"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <nova:ports>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <nova:port uuid="f76722ac-216e-4706-9ca6-804d90bbbc7f">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        </nova:port>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      </nova:ports>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    </nova:instance>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  </metadata>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  <sysinfo type="smbios">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <system>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <entry name="manufacturer">RDO</entry>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <entry name="product">OpenStack Compute</entry>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <entry name="serial">60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20</entry>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <entry name="uuid">60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20</entry>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <entry name="family">Virtual Machine</entry>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    </system>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  </sysinfo>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  <os>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <boot dev="hd"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <smbios mode="sysinfo"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  </os>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  <features>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <acpi/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <apic/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <vmcoreinfo/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  </features>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  <clock offset="utc">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <timer name="hpet" present="no"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  </clock>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  <cpu mode="host-model" match="exact">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  </cpu>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  <devices>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <disk type="network" device="disk">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <target dev="vda" bus="virtio"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <disk type="network" device="cdrom">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk.config">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <target dev="sda" bus="sata"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <interface type="ethernet">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <mac address="fa:16:3e:64:86:43"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <mtu size="1442"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <target dev="tapf76722ac-21"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    </interface>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <serial type="pty">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <log file="/var/lib/nova/instances/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20/console.log" append="off"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    </serial>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <video>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    </video>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <input type="tablet" bus="usb"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <rng model="virtio">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <backend model="random">/dev/urandom</backend>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    </rng>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <controller type="usb" index="0"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    <memballoon model="virtio">
Dec  1 05:15:11 np0005540825 nova_compute[256151]:      <stats period="10"/>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:    </memballoon>
Dec  1 05:15:11 np0005540825 nova_compute[256151]:  </devices>
Dec  1 05:15:11 np0005540825 nova_compute[256151]: </domain>
Dec  1 05:15:11 np0005540825 nova_compute[256151]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.960 256155 DEBUG nova.compute.manager [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Preparing to wait for external event network-vif-plugged-f76722ac-216e-4706-9ca6-804d90bbbc7f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.961 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.961 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.962 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.963 256155 DEBUG nova.virt.libvirt.vif [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1016315753',display_name='tempest-TestNetworkBasicOps-server-1016315753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1016315753',id=1,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJyiZhD79g//PFP56TQaBy3YxEM3LBaA7EcVZ7Tdz/6gMAGTnZhgjP7lR7qjlPZM7TMPAJaWDsBbZE4mpPdHpXPHvYJjJulnETj6bgJEdlnDSD6q5Pc5uIGO8IM6SZd+A==',key_name='tempest-TestNetworkBasicOps-370696341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-a9okexes',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:15:02Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.963 256155 DEBUG nova.network.os_vif_util [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.964 256155 DEBUG nova.network.os_vif_util [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:86:43,bridge_name='br-int',has_traffic_filtering=True,id=f76722ac-216e-4706-9ca6-804d90bbbc7f,network=Network(8c466ba6-3850-4dac-846e-cf97ed839b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf76722ac-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:15:11 np0005540825 nova_compute[256151]: 2025-12-01 10:15:11.965 256155 DEBUG os_vif [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:86:43,bridge_name='br-int',has_traffic_filtering=True,id=f76722ac-216e-4706-9ca6-804d90bbbc7f,network=Network(8c466ba6-3850-4dac-846e-cf97ed839b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf76722ac-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.014 256155 DEBUG ovsdbapp.backend.ovs_idl [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.014 256155 DEBUG ovsdbapp.backend.ovs_idl [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.014 256155 DEBUG ovsdbapp.backend.ovs_idl [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.015 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.016 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.016 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.016 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.018 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.020 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.033 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.033 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.034 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.035 256155 INFO oslo.privsep.daemon [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpxjo3huhi/privsep.sock']#033[00m
Dec  1 05:15:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 88 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 56 op/s
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.793 256155 INFO oslo.privsep.daemon [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.671 262532 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.678 262532 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.682 262532 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec  1 05:15:12 np0005540825 nova_compute[256151]: 2025-12-01 10:15:12.682 262532 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262532#033[00m
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.048 256155 DEBUG nova.network.neutron [req-3719add3-9c39-4744-81bd-b649b733434d req-d290c666-4e6a-44c7-8ba7-3e051cb3c93f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Updated VIF entry in instance network info cache for port f76722ac-216e-4706-9ca6-804d90bbbc7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.049 256155 DEBUG nova.network.neutron [req-3719add3-9c39-4744-81bd-b649b733434d req-d290c666-4e6a-44c7-8ba7-3e051cb3c93f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Updating instance_info_cache with network_info: [{"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.068 256155 DEBUG oslo_concurrency.lockutils [req-3719add3-9c39-4744-81bd-b649b733434d req-d290c666-4e6a-44c7-8ba7-3e051cb3c93f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.092 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.092 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf76722ac-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.094 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf76722ac-21, col_values=(('external_ids', {'iface-id': 'f76722ac-216e-4706-9ca6-804d90bbbc7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:86:43', 'vm-uuid': '60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.096 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:13 np0005540825 NetworkManager[48963]: <info>  [1764584113.0972] manager: (tapf76722ac-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.100 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.106 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.108 256155 INFO os_vif [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:86:43,bridge_name='br-int',has_traffic_filtering=True,id=f76722ac-216e-4706-9ca6-804d90bbbc7f,network=Network(8c466ba6-3850-4dac-846e-cf97ed839b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf76722ac-21')#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.215 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.216 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.216 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No VIF found with MAC fa:16:3e:64:86:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.217 256155 INFO nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Using config drive#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.260 256155 DEBUG nova.storage.rbd_utils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:15:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:13.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:13.626Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:13.627Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efccc000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:13.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.915 256155 INFO nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Creating config drive at /var/lib/nova/instances/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20/disk.config#033[00m
Dec  1 05:15:13 np0005540825 nova_compute[256151]: 2025-12-01 10:15:13.924 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoteucjmm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:15:14 np0005540825 nova_compute[256151]: 2025-12-01 10:15:14.072 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoteucjmm" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:15:14 np0005540825 nova_compute[256151]: 2025-12-01 10:15:14.104 256155 DEBUG nova.storage.rbd_utils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:15:14 np0005540825 nova_compute[256151]: 2025-12-01 10:15:14.108 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20/disk.config 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:15:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:14 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:14 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4000b60 fd 50 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v737: 353 pgs: 353 active+clean; 88 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.5 MiB/s wr, 40 op/s
Dec  1 05:15:14 np0005540825 nova_compute[256151]: 2025-12-01 10:15:14.750 256155 DEBUG oslo_concurrency.processutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20/disk.config 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:15:14 np0005540825 nova_compute[256151]: 2025-12-01 10:15:14.751 256155 INFO nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Deleting local config drive /var/lib/nova/instances/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20/disk.config because it was imported into RBD.#033[00m
Dec  1 05:15:14 np0005540825 systemd[1]: Starting libvirt secret daemon...
Dec  1 05:15:14 np0005540825 systemd[1]: Started libvirt secret daemon.
Dec  1 05:15:14 np0005540825 nova_compute[256151]: 2025-12-01 10:15:14.870 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:14 np0005540825 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  1 05:15:14 np0005540825 kernel: tapf76722ac-21: entered promiscuous mode
Dec  1 05:15:14 np0005540825 NetworkManager[48963]: <info>  [1764584114.9239] manager: (tapf76722ac-21): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Dec  1 05:15:14 np0005540825 ovn_controller[153404]: 2025-12-01T10:15:14Z|00027|binding|INFO|Claiming lport f76722ac-216e-4706-9ca6-804d90bbbc7f for this chassis.
Dec  1 05:15:14 np0005540825 ovn_controller[153404]: 2025-12-01T10:15:14Z|00028|binding|INFO|f76722ac-216e-4706-9ca6-804d90bbbc7f: Claiming fa:16:3e:64:86:43 10.100.0.13
Dec  1 05:15:14 np0005540825 nova_compute[256151]: 2025-12-01 10:15:14.924 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:14 np0005540825 nova_compute[256151]: 2025-12-01 10:15:14.930 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:14 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:14.940 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:86:43 10.100.0.13'], port_security=['fa:16:3e:64:86:43 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c466ba6-3850-4dac-846e-cf97ed839b53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10e0d4a2-5f12-4bc6-a3e3-16e6e801f68c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c323932-e602-4ad2-aee6-0c52ba24fdb8, chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=f76722ac-216e-4706-9ca6-804d90bbbc7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:15:14 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:14.942 163291 INFO neutron.agent.ovn.metadata.agent [-] Port f76722ac-216e-4706-9ca6-804d90bbbc7f in datapath 8c466ba6-3850-4dac-846e-cf97ed839b53 bound to our chassis#033[00m
Dec  1 05:15:14 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:14.944 163291 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c466ba6-3850-4dac-846e-cf97ed839b53#033[00m
Dec  1 05:15:14 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:14.945 163291 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpiq3cz70z/privsep.sock']#033[00m
Dec  1 05:15:14 np0005540825 systemd-machined[216307]: New machine qemu-1-instance-00000001.
Dec  1 05:15:14 np0005540825 systemd-udevd[262650]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 05:15:15 np0005540825 nova_compute[256151]: 2025-12-01 10:15:15.019 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:15 np0005540825 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  1 05:15:15 np0005540825 NetworkManager[48963]: <info>  [1764584115.0278] device (tapf76722ac-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 05:15:15 np0005540825 NetworkManager[48963]: <info>  [1764584115.0287] device (tapf76722ac-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 05:15:15 np0005540825 ovn_controller[153404]: 2025-12-01T10:15:15Z|00029|binding|INFO|Setting lport f76722ac-216e-4706-9ca6-804d90bbbc7f ovn-installed in OVS
Dec  1 05:15:15 np0005540825 ovn_controller[153404]: 2025-12-01T10:15:15Z|00030|binding|INFO|Setting lport f76722ac-216e-4706-9ca6-804d90bbbc7f up in Southbound
Dec  1 05:15:15 np0005540825 nova_compute[256151]: 2025-12-01 10:15:15.029 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec  1 05:15:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec  1 05:15:15 np0005540825 ceph-mon[74416]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec  1 05:15:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:15.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:15 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:15.703 163291 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 05:15:15 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:15.705 163291 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpiq3cz70z/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  1 05:15:15 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:15.579 262668 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 05:15:15 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:15.586 262668 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 05:15:15 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:15.591 262668 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec  1 05:15:15 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:15.592 262668 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262668#033[00m
Dec  1 05:15:15 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:15.709 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4e35b6-a0ff-44a7-8446-35ea5afe22b5]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:15 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:15.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.073 256155 DEBUG nova.compute.manager [req-03dfad2c-4866-4a95-9fb5-a403c71ca6b3 req-ac0dabca-d021-4f3c-b677-2b9a981bfb11 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Received event network-vif-plugged-f76722ac-216e-4706-9ca6-804d90bbbc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.073 256155 DEBUG oslo_concurrency.lockutils [req-03dfad2c-4866-4a95-9fb5-a403c71ca6b3 req-ac0dabca-d021-4f3c-b677-2b9a981bfb11 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.074 256155 DEBUG oslo_concurrency.lockutils [req-03dfad2c-4866-4a95-9fb5-a403c71ca6b3 req-ac0dabca-d021-4f3c-b677-2b9a981bfb11 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.074 256155 DEBUG oslo_concurrency.lockutils [req-03dfad2c-4866-4a95-9fb5-a403c71ca6b3 req-ac0dabca-d021-4f3c-b677-2b9a981bfb11 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.074 256155 DEBUG nova.compute.manager [req-03dfad2c-4866-4a95-9fb5-a403c71ca6b3 req-ac0dabca-d021-4f3c-b677-2b9a981bfb11 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Processing event network-vif-plugged-f76722ac-216e-4706-9ca6-804d90bbbc7f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.118 256155 DEBUG nova.compute.manager [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.119 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584116.1190324, 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.119 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] VM Started (Lifecycle Event)#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.122 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.137 256155 INFO nova.virt.libvirt.driver [-] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Instance spawned successfully.#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.137 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.166 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.173 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.177 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.177 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.178 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.178 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.179 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.179 256155 DEBUG nova.virt.libvirt.driver [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.212 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.213 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584116.1192176, 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.213 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] VM Paused (Lifecycle Event)#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.245 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.248 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584116.1214712, 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.249 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] VM Resumed (Lifecycle Event)#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.257 256155 INFO nova.compute.manager [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Took 13.97 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.258 256155 DEBUG nova.compute.manager [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.267 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.269 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.308 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.332 256155 INFO nova.compute.manager [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Took 15.35 seconds to build instance.#033[00m
Dec  1 05:15:16 np0005540825 nova_compute[256151]: 2025-12-01 10:15:16.349 256155 DEBUG oslo_concurrency.lockutils [None req-96f552f6-2579-4ecd-9ba5-9f709a477550 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:15:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101516 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:15:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:16 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc9c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:16 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:16 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:16.496 262668 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:15:16 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:16.496 262668 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:15:16 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:16.496 262668 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:15:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v739: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.3 MiB/s wr, 43 op/s
Dec  1 05:15:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:17.189Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:17 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:17.320 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[949faec8-9494-449b-a36c-5bcc41c37852]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:17 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:17.321 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8c466ba6-31 in ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 05:15:17 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:17.323 262668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8c466ba6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 05:15:17 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:17.323 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[4ccd1dd4-5aa8-4928-bdf6-c4c6110d95f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:17 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:17.330 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[8438253c-c7f3-4de9-a731-cba9ec18a14e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:17 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:17.372 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[5a542992-344a-483a-a919-fdba1f88f72e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:17 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:17.412 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6e7dc1-542c-4099-90b4-306623d7e693]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:17 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:17.414 163291 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmphrgealyw/privsep.sock']#033[00m
Dec  1 05:15:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:17.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:17 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:17.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:18 np0005540825 nova_compute[256151]: 2025-12-01 10:15:18.096 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:18 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:18.151 163291 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 05:15:18 np0005540825 nova_compute[256151]: 2025-12-01 10:15:18.153 256155 DEBUG nova.compute.manager [req-0552669b-ce86-4b2b-b113-467d22bf39dc req-ec48e9f8-09e4-4f92-b461-603f8964536f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Received event network-vif-plugged-f76722ac-216e-4706-9ca6-804d90bbbc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:15:18 np0005540825 nova_compute[256151]: 2025-12-01 10:15:18.153 256155 DEBUG oslo_concurrency.lockutils [req-0552669b-ce86-4b2b-b113-467d22bf39dc req-ec48e9f8-09e4-4f92-b461-603f8964536f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:15:18 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:18.152 163291 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmphrgealyw/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  1 05:15:18 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:18.043 262728 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 05:15:18 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:18.048 262728 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 05:15:18 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:18.051 262728 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  1 05:15:18 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:18.051 262728 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262728#033[00m
Dec  1 05:15:18 np0005540825 nova_compute[256151]: 2025-12-01 10:15:18.154 256155 DEBUG oslo_concurrency.lockutils [req-0552669b-ce86-4b2b-b113-467d22bf39dc req-ec48e9f8-09e4-4f92-b461-603f8964536f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:15:18 np0005540825 nova_compute[256151]: 2025-12-01 10:15:18.154 256155 DEBUG oslo_concurrency.lockutils [req-0552669b-ce86-4b2b-b113-467d22bf39dc req-ec48e9f8-09e4-4f92-b461-603f8964536f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:15:18 np0005540825 nova_compute[256151]: 2025-12-01 10:15:18.155 256155 DEBUG nova.compute.manager [req-0552669b-ce86-4b2b-b113-467d22bf39dc req-ec48e9f8-09e4-4f92-b461-603f8964536f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] No waiting events found dispatching network-vif-plugged-f76722ac-216e-4706-9ca6-804d90bbbc7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:15:18 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:18.155 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[f818dc25-7a11-4069-9927-d4e67d16a85f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:18 np0005540825 nova_compute[256151]: 2025-12-01 10:15:18.155 256155 WARNING nova.compute.manager [req-0552669b-ce86-4b2b-b113-467d22bf39dc req-ec48e9f8-09e4-4f92-b461-603f8964536f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Received unexpected event network-vif-plugged-f76722ac-216e-4706-9ca6-804d90bbbc7f for instance with vm_state active and task_state None.#033[00m
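Note: the warning above is benign in this trace: Neutron delivered network-vif-plugged for a port whose instance is already active, so no waiter had been registered and pop_instance_event found nothing to signal. An illustrative sketch of the waiter bookkeeping, with simplified names (Nova's real implementation is nova.compute.manager.InstanceEvents):

    import threading

    class InstanceEvents:
        """Simplified stand-in for nova.compute.manager.InstanceEvents."""

        def __init__(self):
            self._lock = threading.Lock()   # the "<uuid>-events" lock above
            self._waiters = {}              # (instance_uuid, event) -> Event

        def prepare_for_instance_event(self, instance_uuid, event):
            # A task expecting e.g. network-vif-plugged registers first ...
            with self._lock:
                waiter = self._waiters.setdefault((instance_uuid, event),
                                                  threading.Event())
            return waiter

        def pop_instance_event(self, instance_uuid, event):
            # ... and the external-event handler pops and signals it. When
            # nothing was registered, Nova logs "No waiting events found" and,
            # for an active instance, the "Received unexpected event" warning.
            with self._lock:
                waiter = self._waiters.pop((instance_uuid, event), None)
            if waiter is not None:
                waiter.set()
            return waiter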
Dec  1 05:15:18 np0005540825 podman[262729]: 2025-12-01 10:15:18.216179796 +0000 UTC m=+0.084529626 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Dec  1 05:15:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:18 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:18 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc9c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:18 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:18.667 262728 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:15:18 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:18.668 262728 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:15:18 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:18.668 262728 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:15:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 40 op/s
Dec  1 05:15:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:18.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.296 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[d56fcc24-b31d-45ee-a860-176ff841e801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:19 np0005540825 NetworkManager[48963]: <info>  [1764584119.3264] manager: (tap8c466ba6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.325 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[1f03060f-7af6-4a3b-8f62-2426baa51a78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:19 np0005540825 systemd-udevd[262758]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.367 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7e102d-778e-4f55-89f3-c737d96bacdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.374 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[c88f4660-27de-4bd3-b024-d590a0e08e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:19 np0005540825 NetworkManager[48963]: <info>  [1764584119.4156] device (tap8c466ba6-30): carrier: link connected
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.423 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[31a6bd83-3dbd-4e00-8d46-ab431c96bf3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.449 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7fb363-3309-4635-b3cc-b8e9c0f64e36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c466ba6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:20:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398804, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262776, 'error': None, 'target': 'ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.469 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[8e71d8d1-ca35-44c4-82b4-1648b5c8f7fc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:20bf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 398804, 'tstamp': 398804}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262777, 'error': None, 'target': 'ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.492 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[74d7d3ca-05d7-4612-9e84-388b09118fca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c466ba6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:20:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398804, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262779, 'error': None, 'target': 'ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
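Note: the two long "(4, [...])" replies above are pyroute2 netlink messages (RTM_NEWLINK dumps for tap8c466ba6-31) marshalled back to the agent through privsep; the 'target' field in each header names the ovnmeta network namespace they were read from. A hedged way to reproduce such a dump by hand with pyroute2 (namespace name copied from the log):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53') as ns:
        for msg in ns.link('dump'):
            # Each message carries the attributes shown in the log:
            # IFLA_IFNAME, IFLA_ADDRESS, IFLA_STATS64, IFLA_AF_SPEC, ...
            print(msg.get_attr('IFLA_IFNAME'), msg.get_attr('IFLA_ADDRESS'))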
Dec  1 05:15:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:19.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.537 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[64b8bc85-f141-41d5-adc7-37ca04d5968c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.620 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e70b07-6e37-4ab5-8164-fd144f805249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.623 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c466ba6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.624 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.625 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c466ba6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:15:19 np0005540825 nova_compute[256151]: 2025-12-01 10:15:19.628 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:19 np0005540825 kernel: tap8c466ba6-30: entered promiscuous mode
Dec  1 05:15:19 np0005540825 NetworkManager[48963]: <info>  [1764584119.6290] manager: (tap8c466ba6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec  1 05:15:19 np0005540825 nova_compute[256151]: 2025-12-01 10:15:19.631 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.639 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c466ba6-30, col_values=(('external_ids', {'iface-id': 'd9b22cb4-2520-4db5-9f61-76a8a39f3543'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
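Note: the three ovsdbapp transactions above move the metadata tap off br-ex, attach it to br-int, and bind it to its OVN port by setting external_ids:iface-id. A sketch of equivalent direct ovsdbapp calls; the socket path and timeout are illustrative, while the port, bridge, and iface-id values are the ones logged above:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Each api.* call below maps onto one of the logged commands:
    # DelPortCommand, AddPortCommand, DbSetCommand.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap8c466ba6-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap8c466ba6-30', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap8c466ba6-30',
            ('external_ids',
             {'iface-id': 'd9b22cb4-2520-4db5-9f61-76a8a39f3543'})))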
Dec  1 05:15:19 np0005540825 nova_compute[256151]: 2025-12-01 10:15:19.641 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:19 np0005540825 ovn_controller[153404]: 2025-12-01T10:15:19Z|00031|binding|INFO|Releasing lport d9b22cb4-2520-4db5-9f61-76a8a39f3543 from this chassis (sb_readonly=0)
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.644 163291 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8c466ba6-3850-4dac-846e-cf97ed839b53.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8c466ba6-3850-4dac-846e-cf97ed839b53.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.645 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec1db72-144a-4f14-9035-cfd734b99639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.647 163291 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: global
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    log         /dev/log local0 debug
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    log-tag     haproxy-metadata-proxy-8c466ba6-3850-4dac-846e-cf97ed839b53
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    user        root
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    group       root
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    maxconn     1024
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    pidfile     /var/lib/neutron/external/pids/8c466ba6-3850-4dac-846e-cf97ed839b53.pid.haproxy
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    daemon
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: defaults
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    log global
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    mode http
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    option httplog
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    option dontlognull
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    option http-server-close
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    option forwardfor
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    retries                 3
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    timeout http-request    30s
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    timeout connect         30s
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    timeout client          32s
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    timeout server          32s
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    timeout http-keep-alive 30s
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: listen listener
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    bind 169.254.169.254:80
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]:    http-request add-header X-OVN-Network-ID 8c466ba6-3850-4dac-846e-cf97ed839b53
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 05:15:19 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:19.648 163291 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53', 'env', 'PROCESS_TAG=haproxy-8c466ba6-3850-4dac-846e-cf97ed839b53', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8c466ba6-3850-4dac-846e-cf97ed839b53.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
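Note: create_config_file above rendered the per-network haproxy.cfg (pidfile, unix-socket backend, X-OVN-Network-ID header), and the agent then execs haproxy inside the ovnmeta namespace via rootwrap. A hedged reconstruction of that spawn, with plain subprocess standing in for neutron.agent.linux.utils.create_process; every argument is copied from the command line in the log:

    import subprocess

    network_id = '8c466ba6-3850-4dac-846e-cf97ed839b53'
    subprocess.run(
        ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
         'ip', 'netns', 'exec', 'ovnmeta-%s' % network_id,
         'env', 'PROCESS_TAG=haproxy-%s' % network_id,
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id],
        check=True)  # haproxy daemonizes; cf. the "New worker ... forked" line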
Dec  1 05:15:19 np0005540825 nova_compute[256151]: 2025-12-01 10:15:19.670 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:19 np0005540825 ovn_controller[153404]: 2025-12-01T10:15:19Z|00032|binding|INFO|Releasing lport d9b22cb4-2520-4db5-9f61-76a8a39f3543 from this chassis (sb_readonly=0)
Dec  1 05:15:19 np0005540825 nova_compute[256151]: 2025-12-01 10:15:19.753 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:19 np0005540825 NetworkManager[48963]: <info>  [1764584119.7539] manager: (patch-provnet-da274a4a-a49c-4f01-b728-391696cd2672-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Dec  1 05:15:19 np0005540825 NetworkManager[48963]: <info>  [1764584119.7543] device (patch-provnet-da274a4a-a49c-4f01-b728-391696cd2672-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 05:15:19 np0005540825 NetworkManager[48963]: <info>  [1764584119.7554] manager: (patch-br-int-to-provnet-da274a4a-a49c-4f01-b728-391696cd2672): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Dec  1 05:15:19 np0005540825 NetworkManager[48963]: <info>  [1764584119.7558] device (patch-br-int-to-provnet-da274a4a-a49c-4f01-b728-391696cd2672)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 05:15:19 np0005540825 NetworkManager[48963]: <info>  [1764584119.7566] manager: (patch-br-int-to-provnet-da274a4a-a49c-4f01-b728-391696cd2672): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec  1 05:15:19 np0005540825 NetworkManager[48963]: <info>  [1764584119.7571] manager: (patch-provnet-da274a4a-a49c-4f01-b728-391696cd2672-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Dec  1 05:15:19 np0005540825 NetworkManager[48963]: <info>  [1764584119.7575] device (patch-provnet-da274a4a-a49c-4f01-b728-391696cd2672-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 05:15:19 np0005540825 NetworkManager[48963]: <info>  [1764584119.7577] device (patch-br-int-to-provnet-da274a4a-a49c-4f01-b728-391696cd2672)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 05:15:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:19 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:19 np0005540825 ovn_controller[153404]: 2025-12-01T10:15:19Z|00033|binding|INFO|Releasing lport d9b22cb4-2520-4db5-9f61-76a8a39f3543 from this chassis (sb_readonly=0)
Dec  1 05:15:19 np0005540825 nova_compute[256151]: 2025-12-01 10:15:19.826 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:19 np0005540825 nova_compute[256151]: 2025-12-01 10:15:19.832 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:19.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:19 np0005540825 nova_compute[256151]: 2025-12-01 10:15:19.871 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:20 np0005540825 podman[262814]: 2025-12-01 10:15:20.143931616 +0000 UTC m=+0.072349437 container create a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 05:15:20 np0005540825 nova_compute[256151]: 2025-12-01 10:15:20.191 256155 DEBUG nova.compute.manager [req-62166943-5a1b-41e0-8781-2f5c132b3f10 req-a519cf89-9d43-46eb-9c3e-2fbc153c4185 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Received event network-changed-f76722ac-216e-4706-9ca6-804d90bbbc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:15:20 np0005540825 podman[262814]: 2025-12-01 10:15:20.102760163 +0000 UTC m=+0.031178074 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 05:15:20 np0005540825 nova_compute[256151]: 2025-12-01 10:15:20.193 256155 DEBUG nova.compute.manager [req-62166943-5a1b-41e0-8781-2f5c132b3f10 req-a519cf89-9d43-46eb-9c3e-2fbc153c4185 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Refreshing instance network info cache due to event network-changed-f76722ac-216e-4706-9ca6-804d90bbbc7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 05:15:20 np0005540825 nova_compute[256151]: 2025-12-01 10:15:20.193 256155 DEBUG oslo_concurrency.lockutils [req-62166943-5a1b-41e0-8781-2f5c132b3f10 req-a519cf89-9d43-46eb-9c3e-2fbc153c4185 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:15:20 np0005540825 nova_compute[256151]: 2025-12-01 10:15:20.193 256155 DEBUG oslo_concurrency.lockutils [req-62166943-5a1b-41e0-8781-2f5c132b3f10 req-a519cf89-9d43-46eb-9c3e-2fbc153c4185 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:15:20 np0005540825 nova_compute[256151]: 2025-12-01 10:15:20.193 256155 DEBUG nova.network.neutron [req-62166943-5a1b-41e0-8781-2f5c132b3f10 req-a519cf89-9d43-46eb-9c3e-2fbc153c4185 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Refreshing network info cache for port f76722ac-216e-4706-9ca6-804d90bbbc7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 05:15:20 np0005540825 systemd[1]: Started libpod-conmon-a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc.scope.
Dec  1 05:15:20 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:15:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c583df219ed25fc32e87c072f5ce968ee8854ed2108a4c80ec9ecdaddac8339f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 05:15:20 np0005540825 podman[262814]: 2025-12-01 10:15:20.24472638 +0000 UTC m=+0.173144241 container init a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  1 05:15:20 np0005540825 podman[262814]: 2025-12-01 10:15:20.254967647 +0000 UTC m=+0.183385498 container start a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 05:15:20 np0005540825 neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53[262829]: [NOTICE]   (262833) : New worker (262835) forked
Dec  1 05:15:20 np0005540825 neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53[262829]: [NOTICE]   (262833) : Loading success.
Dec  1 05:15:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:20 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:20 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v741: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 66 op/s
Dec  1 05:15:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:21] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Dec  1 05:15:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:21] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Dec  1 05:15:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:21.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:21 np0005540825 nova_compute[256151]: 2025-12-01 10:15:21.743 256155 DEBUG nova.network.neutron [req-62166943-5a1b-41e0-8781-2f5c132b3f10 req-a519cf89-9d43-46eb-9c3e-2fbc153c4185 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Updated VIF entry in instance network info cache for port f76722ac-216e-4706-9ca6-804d90bbbc7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 05:15:21 np0005540825 nova_compute[256151]: 2025-12-01 10:15:21.743 256155 DEBUG nova.network.neutron [req-62166943-5a1b-41e0-8781-2f5c132b3f10 req-a519cf89-9d43-46eb-9c3e-2fbc153c4185 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Updating instance_info_cache with network_info: [{"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:15:21 np0005540825 nova_compute[256151]: 2025-12-01 10:15:21.765 256155 DEBUG oslo_concurrency.lockutils [req-62166943-5a1b-41e0-8781-2f5c132b3f10 req-a519cf89-9d43-46eb-9c3e-2fbc153c4185 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:15:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:21 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc9c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:21.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:22 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:22 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec  1 05:15:23 np0005540825 nova_compute[256151]: 2025-12-01 10:15:23.098 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:23.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:23.628Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:15:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:23.629Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:15:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:23.630Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
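Note: the repeated dispatcher errors above show Alertmanager's ceph-dashboard webhook receivers on compute-1 and compute-2 timing out on port 8443, while radosgw and mgr traffic on this host stays healthy, so the failure is isolated to those peers. A hedged probe of one receiver with requests (URL copied verbatim from the log; run from this host it should time out just as the dispatcher reports):

    import requests

    try:
        requests.post(
            'http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver',
            json={'alerts': []}, timeout=5)
    except requests.exceptions.RequestException as exc:
        print('receiver unreachable:', exc)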
Dec  1 05:15:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:23 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:23.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:24 np0005540825 podman[262873]: 2025-12-01 10:15:24.230169093 +0000 UTC m=+0.092459340 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 05:15:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:24 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc9c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:24 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:15:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:15:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec  1 05:15:24 np0005540825 nova_compute[256151]: 2025-12-01 10:15:24.873 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:25.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:25 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:25.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:26 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:26 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc9c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Dec  1 05:15:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:27.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:27.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:27 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:27.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:28 np0005540825 nova_compute[256151]: 2025-12-01 10:15:28.101 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:28 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  1 05:15:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:28 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:28 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Dec  1 05:15:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:28.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:29.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:29 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc9c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:29.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:29 np0005540825 nova_compute[256151]: 2025-12-01 10:15:29.925 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:30 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:30 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 120 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Dec  1 05:15:30 np0005540825 ovn_controller[153404]: 2025-12-01T10:15:30Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:86:43 10.100.0.13
Dec  1 05:15:30 np0005540825 ovn_controller[153404]: 2025-12-01T10:15:30Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:86:43 10.100.0.13
Dec  1 05:15:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:31] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:15:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:31] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:15:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:31.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:31 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:31.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:32 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc9c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:32 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Dec  1 05:15:33 np0005540825 nova_compute[256151]: 2025-12-01 10:15:33.103 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:33.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:33.631Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101533 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:15:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:33 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:33.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:34 np0005540825 podman[262907]: 2025-12-01 10:15:34.266418067 +0000 UTC m=+0.129128052 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:15:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:34 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:34 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc9c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v748: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  1 05:15:34 np0005540825 nova_compute[256151]: 2025-12-01 10:15:34.963 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:35.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:35 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:35.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:36 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:36 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec  1 05:15:37 np0005540825 nova_compute[256151]: 2025-12-01 10:15:37.111 256155 INFO nova.compute.manager [None req-e633c147-6cc8-49f3-abde-ef36b60aee92 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Get console output#033[00m
Dec  1 05:15:37 np0005540825 nova_compute[256151]: 2025-12-01 10:15:37.117 256155 INFO oslo.privsep.daemon [None req-e633c147-6cc8-49f3-abde-ef36b60aee92 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpcn2jx469/privsep.sock']#033[00m
Dec  1 05:15:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:37.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:37.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:37 np0005540825 nova_compute[256151]: 2025-12-01 10:15:37.803 256155 INFO oslo.privsep.daemon [None req-e633c147-6cc8-49f3-abde-ef36b60aee92 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 05:15:37 np0005540825 nova_compute[256151]: 2025-12-01 10:15:37.674 262942 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 05:15:37 np0005540825 nova_compute[256151]: 2025-12-01 10:15:37.679 262942 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 05:15:37 np0005540825 nova_compute[256151]: 2025-12-01 10:15:37.681 262942 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  1 05:15:37 np0005540825 nova_compute[256151]: 2025-12-01 10:15:37.698 262942 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262942#033[00m
Dec  1 05:15:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:37 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:37.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:37 np0005540825 nova_compute[256151]: 2025-12-01 10:15:37.911 262942 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  1 05:15:38 np0005540825 nova_compute[256151]: 2025-12-01 10:15:38.105 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:38 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:38 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc0000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v750: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec  1 05:15:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:38.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:15:39
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.log', '.rgw.root', '.nfs', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images']
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:15:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:15:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:15:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:39.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:15:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:39 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:15:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:39.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:15:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:15:39 np0005540825 nova_compute[256151]: 2025-12-01 10:15:39.969 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:40 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:40 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec  1 05:15:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:41] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:15:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:41] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:15:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:41.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:41 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc0001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:41.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:42 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:42 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 316 KiB/s wr, 20 op/s
Dec  1 05:15:43 np0005540825 nova_compute[256151]: 2025-12-01 10:15:43.107 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:43.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:43 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:15:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:43.632Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:43 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:43.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:44 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc0001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:44 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v753: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec  1 05:15:44 np0005540825 nova_compute[256151]: 2025-12-01 10:15:44.973 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:45.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:45 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:45.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:46 np0005540825 nova_compute[256151]: 2025-12-01 10:15:46.022 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:15:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:46 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:46 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8001cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:46 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:15:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:46 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:15:46 np0005540825 nova_compute[256151]: 2025-12-01 10:15:46.739 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:46 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:46.740 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:15:46 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:46.742 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 05:15:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 17 KiB/s wr, 3 op/s
Dec  1 05:15:46 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:15:46.744 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:15:47 np0005540825 nova_compute[256151]: 2025-12-01 10:15:47.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:15:47 np0005540825 nova_compute[256151]: 2025-12-01 10:15:47.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 05:15:47 np0005540825 nova_compute[256151]: 2025-12-01 10:15:47.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 05:15:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:47.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:47.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:47 np0005540825 nova_compute[256151]: 2025-12-01 10:15:47.585 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:15:47 np0005540825 nova_compute[256151]: 2025-12-01 10:15:47.586 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquired lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:15:47 np0005540825 nova_compute[256151]: 2025-12-01 10:15:47.586 256155 DEBUG nova.network.neutron [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 05:15:47 np0005540825 nova_compute[256151]: 2025-12-01 10:15:47.587 256155 DEBUG nova.objects.instance [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:15:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:47 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:47.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:48 np0005540825 nova_compute[256151]: 2025-12-01 10:15:48.109 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:48 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:48 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 4.2 KiB/s wr, 2 op/s
Dec  1 05:15:48 np0005540825 nova_compute[256151]: 2025-12-01 10:15:48.874 256155 DEBUG nova.network.neutron [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Updating instance_info_cache with network_info: [{"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:15:48 np0005540825 nova_compute[256151]: 2025-12-01 10:15:48.893 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Releasing lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:15:48 np0005540825 nova_compute[256151]: 2025-12-01 10:15:48.894 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 05:15:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:48.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:49 np0005540825 podman[262980]: 2025-12-01 10:15:49.228736319 +0000 UTC m=+0.082538562 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 05:15:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:49.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:49 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:15:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:49 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8001cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:49.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:50 np0005540825 nova_compute[256151]: 2025-12-01 10:15:50.016 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:15:50 np0005540825 nova_compute[256151]: 2025-12-01 10:15:50.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:15:50 np0005540825 nova_compute[256151]: 2025-12-01 10:15:50.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:15:50 np0005540825 nova_compute[256151]: 2025-12-01 10:15:50.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:15:50 np0005540825 nova_compute[256151]: 2025-12-01 10:15:50.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:15:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:50 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:50 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 5.2 KiB/s wr, 2 op/s
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.058 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.059 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.059 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.059 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.060 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:15:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:51] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:15:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:15:51] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:15:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:15:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1649284944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.583 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:15:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:51.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.660 256155 DEBUG nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.660 256155 DEBUG nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.821 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.822 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4403MB free_disk=59.94269943237305GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.822 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.822 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:15:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:51 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40035e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:51.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.883 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Instance 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.884 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.884 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:15:51 np0005540825 nova_compute[256151]: 2025-12-01 10:15:51.925 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:15:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:15:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1405939835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:15:52 np0005540825 nova_compute[256151]: 2025-12-01 10:15:52.385 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
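The "Running cmd" / "CMD ... returned" pair above is nova shelling out for Ceph pool statistics; the matching mon_command dispatch appears in the ceph-mon audit lines in between, and the 0.461s figure is the full subprocess round trip. A minimal sketch of the same call through oslo.concurrency, assuming the cephx id and conf path from the log and that the total_avail_bytes field of ceph df's JSON output is the figure the caller wants (nova's RBD image backend does more bookkeeping than this):

    import json

    from oslo_concurrency import processutils

    # Same command line as in the log; execute() returns (stdout, stderr)
    # and raises ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    stats = json.loads(out)
    # Cluster-wide free space in GiB, taken from the global stats block.
    free_gb = stats['stats']['total_avail_bytes'] / (1 << 30)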
Dec  1 05:15:52 np0005540825 nova_compute[256151]: 2025-12-01 10:15:52.394 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating inventory in ProviderTree for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 05:15:52 np0005540825 nova_compute[256151]: 2025-12-01 10:15:52.432 256155 ERROR nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [req-6d9012b1-d9f9-4057-bc69-0853f1821f58] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 5efe20fe-1981-4bd9-8786-d9fddc89a5ae.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-6d9012b1-d9f9-4057-bc69-0853f1821f58"}]}
Dec  1 05:15:52 np0005540825 nova_compute[256151]: 2025-12-01 10:15:52.455 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing inventories for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  1 05:15:52 np0005540825 nova_compute[256151]: 2025-12-01 10:15:52.487 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating ProviderTree inventory for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  1 05:15:52 np0005540825 nova_compute[256151]: 2025-12-01 10:15:52.488 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating inventory in ProviderTree for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 05:15:52 np0005540825 nova_compute[256151]: 2025-12-01 10:15:52.506 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing aggregate associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  1 05:15:52 np0005540825 nova_compute[256151]: 2025-12-01 10:15:52.526 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing trait associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SVM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  1 05:15:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:52 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc80029e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:52 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:52 np0005540825 nova_compute[256151]: 2025-12-01 10:15:52.566 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:15:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 2.0 KiB/s wr, 3 op/s
Dec  1 05:15:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:15:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297688288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:15:53 np0005540825 nova_compute[256151]: 2025-12-01 10:15:53.068 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:15:53 np0005540825 nova_compute[256151]: 2025-12-01 10:15:53.076 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating inventory in ProviderTree for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 05:15:53 np0005540825 nova_compute[256151]: 2025-12-01 10:15:53.112 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:15:53 np0005540825 nova_compute[256151]: 2025-12-01 10:15:53.124 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updated inventory for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec  1 05:15:53 np0005540825 nova_compute[256151]: 2025-12-01 10:15:53.125 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec  1 05:15:53 np0005540825 nova_compute[256151]: 2025-12-01 10:15:53.125 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating inventory in ProviderTree for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 05:15:53 np0005540825 nova_compute[256151]: 2025-12-01 10:15:53.161 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:15:53 np0005540825 nova_compute[256151]: 2025-12-01 10:15:53.162 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
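The 409 at 10:15:52.432 and the refresh-and-retry that follows it are Placement's optimistic concurrency control in action: every inventory PUT carries the resource provider generation, a stale generation is rejected with code placement.concurrent_update, and the client re-reads the provider (generation 3 here), retries, and on success sees the generation advance to 4. A minimal sketch of that protocol, assuming a requests-style session already authenticated and rooted at the Placement endpoint (the real nova report client adds caching, microversion negotiation, and richer error handling):

    def set_inventory(session, rp_uuid, inventories, retries=3):
        # PUT inventory with the provider's current generation; on a 409
        # generation conflict, refresh the generation and try again.
        url = '/resource_providers/%s/inventories' % rp_uuid
        for _ in range(retries):
            # GET returns the current generation alongside the inventories.
            current = session.get(url).json()
            payload = {
                'resource_provider_generation':
                    current['resource_provider_generation'],
                'inventories': inventories,
            }
            resp = session.put(url, json=payload)
            if resp.status_code != 409:
                return resp  # success, or a non-conflict error for the caller
            # Another writer bumped the generation first -- the log's
            # "resource provider generation conflict" -- so loop and retry.
        raise RuntimeError('placement inventory update kept conflicting')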
Dec  1 05:15:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:15:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:53.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:15:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:53.633Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:53 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:53.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:54 np0005540825 nova_compute[256151]: 2025-12-01 10:15:54.163 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:15:54 np0005540825 nova_compute[256151]: 2025-12-01 10:15:54.164 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:15:54 np0005540825 nova_compute[256151]: 2025-12-01 10:15:54.164 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:15:54 np0005540825 nova_compute[256151]: 2025-12-01 10:15:54.164 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:15:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:54 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101554 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:15:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:54 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:15:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:15:54 np0005540825 podman[263096]: 2025-12-01 10:15:54.546588669 +0000 UTC m=+0.063562659 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  1 05:15:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1.9 KiB/s wr, 3 op/s
Dec  1 05:15:55 np0005540825 nova_compute[256151]: 2025-12-01 10:15:55.069 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:15:55 np0005540825 podman[263212]: 2025-12-01 10:15:55.228896082 +0000 UTC m=+0.100969811 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 05:15:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:15:55 np0005540825 podman[263212]: 2025-12-01 10:15:55.350148129 +0000 UTC m=+0.222221798 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:15:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:15:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:55.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:15:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101555 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:15:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:55 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:55.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:55 np0005540825 podman[263334]: 2025-12-01 10:15:55.971880316 +0000 UTC m=+0.084653599 container exec 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:15:55 np0005540825 podman[263334]: 2025-12-01 10:15:55.984791335 +0000 UTC m=+0.097564558 container exec_died 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:15:56 np0005540825 podman[263424]: 2025-12-01 10:15:56.449504967 +0000 UTC m=+0.081922495 container exec 175072eb9ad8288754525f1835b155d486baa9b9919fdcbe6ed4f80c20993ee5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  1 05:15:56 np0005540825 podman[263424]: 2025-12-01 10:15:56.463651459 +0000 UTC m=+0.096068987 container exec_died 175072eb9ad8288754525f1835b155d486baa9b9919fdcbe6ed4f80c20993ee5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:15:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:56 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc80029e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:56 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.0 KiB/s wr, 10 op/s
Dec  1 05:15:56 np0005540825 podman[263489]: 2025-12-01 10:15:56.771925623 +0000 UTC m=+0.096203822 container exec 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 05:15:56 np0005540825 podman[263489]: 2025-12-01 10:15:56.784600915 +0000 UTC m=+0.108879084 container exec_died 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 05:15:57 np0005540825 podman[263555]: 2025-12-01 10:15:57.016484494 +0000 UTC m=+0.062746078 container exec a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, release=1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git)
Dec  1 05:15:57 np0005540825 podman[263555]: 2025-12-01 10:15:57.029475955 +0000 UTC m=+0.075737539 container exec_died a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, release=1793, version=2.2.4, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec  1 05:15:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:57.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:57 np0005540825 podman[263621]: 2025-12-01 10:15:57.36074657 +0000 UTC m=+0.134851917 container exec fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:15:57 np0005540825 podman[263621]: 2025-12-01 10:15:57.40589983 +0000 UTC m=+0.180005167 container exec_died fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:15:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:57.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:57 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:57.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:57 np0005540825 podman[263697]: 2025-12-01 10:15:57.934231952 +0000 UTC m=+0.131936457 container exec 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 05:15:58 np0005540825 podman[263697]: 2025-12-01 10:15:58.106956211 +0000 UTC m=+0.304660686 container exec_died 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 05:15:58 np0005540825 nova_compute[256151]: 2025-12-01 10:15:58.114 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:15:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:58 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc980016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:58 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc80029e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:58 np0005540825 podman[263807]: 2025-12-01 10:15:58.616086723 +0000 UTC m=+0.079397857 container exec f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:15:58 np0005540825 podman[263807]: 2025-12-01 10:15:58.656431823 +0000 UTC m=+0.119742837 container exec_died f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:15:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:15:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.5 KiB/s wr, 8 op/s
Dec  1 05:15:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:15:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:15:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:15:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:15:58.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:15:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:15:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:15:59.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:15:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 9 op/s
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:15:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:15:59 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:15:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:15:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:15:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:15:59.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:15:59 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:16:00 np0005540825 nova_compute[256151]: 2025-12-01 10:16:00.071 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Dec  1 05:16:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc980016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:00 np0005540825 podman[264024]: 2025-12-01 10:16:00.653240621 +0000 UTC m=+0.071400761 container create dbeb05d7f1d8421688d74064bca76ea72942c6c2e84690490ea53b49c171e523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Dec  1 05:16:00 np0005540825 systemd[1]: Started libpod-conmon-dbeb05d7f1d8421688d74064bca76ea72942c6c2e84690490ea53b49c171e523.scope.
Dec  1 05:16:00 np0005540825 podman[264024]: 2025-12-01 10:16:00.62323608 +0000 UTC m=+0.041396200 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:16:00 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:16:00 np0005540825 podman[264024]: 2025-12-01 10:16:00.807563902 +0000 UTC m=+0.225724052 container init dbeb05d7f1d8421688d74064bca76ea72942c6c2e84690490ea53b49c171e523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goldwasser, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  1 05:16:00 np0005540825 podman[264024]: 2025-12-01 10:16:00.81452062 +0000 UTC m=+0.232680730 container start dbeb05d7f1d8421688d74064bca76ea72942c6c2e84690490ea53b49c171e523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goldwasser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:16:00 np0005540825 podman[264024]: 2025-12-01 10:16:00.817975074 +0000 UTC m=+0.236135194 container attach dbeb05d7f1d8421688d74064bca76ea72942c6c2e84690490ea53b49c171e523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  1 05:16:00 np0005540825 amazing_goldwasser[264040]: 167 167
Dec  1 05:16:00 np0005540825 systemd[1]: libpod-dbeb05d7f1d8421688d74064bca76ea72942c6c2e84690490ea53b49c171e523.scope: Deactivated successfully.
Dec  1 05:16:00 np0005540825 podman[264024]: 2025-12-01 10:16:00.823190905 +0000 UTC m=+0.241351025 container died dbeb05d7f1d8421688d74064bca76ea72942c6c2e84690490ea53b49c171e523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:16:01 np0005540825 systemd[1]: var-lib-containers-storage-overlay-68220a748234747cfeeb3b86a07e7a177047a38140ef6e221469016f5164fc77-merged.mount: Deactivated successfully.
Dec  1 05:16:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:16:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:16:01 np0005540825 ceph-mon[74416]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Dec  1 05:16:01 np0005540825 podman[264024]: 2025-12-01 10:16:01.053312365 +0000 UTC m=+0.471472495 container remove dbeb05d7f1d8421688d74064bca76ea72942c6c2e84690490ea53b49c171e523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_goldwasser, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec  1 05:16:01 np0005540825 systemd[1]: libpod-conmon-dbeb05d7f1d8421688d74064bca76ea72942c6c2e84690490ea53b49c171e523.scope: Deactivated successfully.
Dec  1 05:16:01 np0005540825 podman[264067]: 2025-12-01 10:16:01.264270098 +0000 UTC m=+0.044644748 container create d75a2d9a315157098b1b2fcf20ccfe80a9eccc86acf71003f624037345de318a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:16:01 np0005540825 systemd[1]: Started libpod-conmon-d75a2d9a315157098b1b2fcf20ccfe80a9eccc86acf71003f624037345de318a.scope.
Dec  1 05:16:01 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:16:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46459b3d97b134e5660f65b404b7cddeb2e27d6df2d03f00f8f89ad30c8bb603/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46459b3d97b134e5660f65b404b7cddeb2e27d6df2d03f00f8f89ad30c8bb603/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46459b3d97b134e5660f65b404b7cddeb2e27d6df2d03f00f8f89ad30c8bb603/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46459b3d97b134e5660f65b404b7cddeb2e27d6df2d03f00f8f89ad30c8bb603/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:01 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46459b3d97b134e5660f65b404b7cddeb2e27d6df2d03f00f8f89ad30c8bb603/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:01 np0005540825 podman[264067]: 2025-12-01 10:16:01.24586406 +0000 UTC m=+0.026238750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:16:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:01] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Dec  1 05:16:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:01] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Dec  1 05:16:01 np0005540825 podman[264067]: 2025-12-01 10:16:01.420034629 +0000 UTC m=+0.200409299 container init d75a2d9a315157098b1b2fcf20ccfe80a9eccc86acf71003f624037345de318a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 05:16:01 np0005540825 podman[264067]: 2025-12-01 10:16:01.429101164 +0000 UTC m=+0.209475814 container start d75a2d9a315157098b1b2fcf20ccfe80a9eccc86acf71003f624037345de318a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  1 05:16:01 np0005540825 podman[264067]: 2025-12-01 10:16:01.486411133 +0000 UTC m=+0.266785823 container attach d75a2d9a315157098b1b2fcf20ccfe80a9eccc86acf71003f624037345de318a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_stonebraker, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 05:16:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:01.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 39 op/s
Dec  1 05:16:01 np0005540825 vigorous_stonebraker[264084]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:16:01 np0005540825 vigorous_stonebraker[264084]: --> All data devices are unavailable
Dec  1 05:16:01 np0005540825 systemd[1]: libpod-d75a2d9a315157098b1b2fcf20ccfe80a9eccc86acf71003f624037345de318a.scope: Deactivated successfully.
Dec  1 05:16:01 np0005540825 podman[264067]: 2025-12-01 10:16:01.826902427 +0000 UTC m=+0.607277147 container died d75a2d9a315157098b1b2fcf20ccfe80a9eccc86acf71003f624037345de318a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:16:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:01 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:01 np0005540825 systemd[1]: var-lib-containers-storage-overlay-46459b3d97b134e5660f65b404b7cddeb2e27d6df2d03f00f8f89ad30c8bb603-merged.mount: Deactivated successfully.
Dec  1 05:16:01 np0005540825 podman[264067]: 2025-12-01 10:16:01.876803906 +0000 UTC m=+0.657178586 container remove d75a2d9a315157098b1b2fcf20ccfe80a9eccc86acf71003f624037345de318a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_stonebraker, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:16:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:01.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:01 np0005540825 systemd[1]: libpod-conmon-d75a2d9a315157098b1b2fcf20ccfe80a9eccc86acf71003f624037345de318a.scope: Deactivated successfully.
Dec  1 05:16:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:02 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:02 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:02 np0005540825 podman[264203]: 2025-12-01 10:16:02.597423925 +0000 UTC m=+0.070408195 container create 010f86ee343bfb84c5c437431190ea58d0d0f20a62d7a06f93508cb1d5460e2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_gates, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 05:16:02 np0005540825 systemd[1]: Started libpod-conmon-010f86ee343bfb84c5c437431190ea58d0d0f20a62d7a06f93508cb1d5460e2c.scope.
Dec  1 05:16:02 np0005540825 podman[264203]: 2025-12-01 10:16:02.568950555 +0000 UTC m=+0.041934885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:16:02 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:16:02 np0005540825 podman[264203]: 2025-12-01 10:16:02.690781288 +0000 UTC m=+0.163765558 container init 010f86ee343bfb84c5c437431190ea58d0d0f20a62d7a06f93508cb1d5460e2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:16:02 np0005540825 podman[264203]: 2025-12-01 10:16:02.701890278 +0000 UTC m=+0.174874548 container start 010f86ee343bfb84c5c437431190ea58d0d0f20a62d7a06f93508cb1d5460e2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_gates, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:16:02 np0005540825 podman[264203]: 2025-12-01 10:16:02.705879166 +0000 UTC m=+0.178863426 container attach 010f86ee343bfb84c5c437431190ea58d0d0f20a62d7a06f93508cb1d5460e2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 05:16:02 np0005540825 wizardly_gates[264219]: 167 167
Dec  1 05:16:02 np0005540825 systemd[1]: libpod-010f86ee343bfb84c5c437431190ea58d0d0f20a62d7a06f93508cb1d5460e2c.scope: Deactivated successfully.
Dec  1 05:16:02 np0005540825 podman[264203]: 2025-12-01 10:16:02.709387291 +0000 UTC m=+0.182371571 container died 010f86ee343bfb84c5c437431190ea58d0d0f20a62d7a06f93508cb1d5460e2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_gates, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  1 05:16:02 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ce9c4494395a9786975d9bba669101c8f339578fcb471eecbce8e5fc4961a60d-merged.mount: Deactivated successfully.
Dec  1 05:16:02 np0005540825 podman[264203]: 2025-12-01 10:16:02.751289464 +0000 UTC m=+0.224273734 container remove 010f86ee343bfb84c5c437431190ea58d0d0f20a62d7a06f93508cb1d5460e2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_gates, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 05:16:02 np0005540825 systemd[1]: libpod-conmon-010f86ee343bfb84c5c437431190ea58d0d0f20a62d7a06f93508cb1d5460e2c.scope: Deactivated successfully.
Dec  1 05:16:03 np0005540825 podman[264244]: 2025-12-01 10:16:03.002905665 +0000 UTC m=+0.077348871 container create 4da6d4fe959281d49bfa4a68a22af903be2818a504dcfb555cb192b728dc437a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:16:03 np0005540825 systemd[1]: Started libpod-conmon-4da6d4fe959281d49bfa4a68a22af903be2818a504dcfb555cb192b728dc437a.scope.
Dec  1 05:16:03 np0005540825 podman[264244]: 2025-12-01 10:16:02.973419708 +0000 UTC m=+0.047862984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:16:03 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:16:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1923d85460dd2ab5cde2511bd972857ee0e2b789c8fcd887913c531f4ae2e023/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1923d85460dd2ab5cde2511bd972857ee0e2b789c8fcd887913c531f4ae2e023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1923d85460dd2ab5cde2511bd972857ee0e2b789c8fcd887913c531f4ae2e023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:03 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1923d85460dd2ab5cde2511bd972857ee0e2b789c8fcd887913c531f4ae2e023/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:03 np0005540825 nova_compute[256151]: 2025-12-01 10:16:03.117 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:03 np0005540825 podman[264244]: 2025-12-01 10:16:03.129829386 +0000 UTC m=+0.204272662 container init 4da6d4fe959281d49bfa4a68a22af903be2818a504dcfb555cb192b728dc437a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 05:16:03 np0005540825 podman[264244]: 2025-12-01 10:16:03.147795962 +0000 UTC m=+0.222239178 container start 4da6d4fe959281d49bfa4a68a22af903be2818a504dcfb555cb192b728dc437a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_liskov, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:16:03 np0005540825 podman[264244]: 2025-12-01 10:16:03.152437268 +0000 UTC m=+0.226880524 container attach 4da6d4fe959281d49bfa4a68a22af903be2818a504dcfb555cb192b728dc437a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]: {
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:    "1": [
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:        {
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            "devices": [
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "/dev/loop3"
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            ],
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            "lv_name": "ceph_lv0",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            "lv_size": "21470642176",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            "name": "ceph_lv0",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            "tags": {
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.cluster_name": "ceph",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.crush_device_class": "",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.encrypted": "0",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.osd_id": "1",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.type": "block",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.vdo": "0",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:                "ceph.with_tpm": "0"
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            },
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            "type": "block",
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:            "vg_name": "ceph_vg0"
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:        }
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]:    ]
Dec  1 05:16:03 np0005540825 romantic_liskov[264260]: }
Dec  1 05:16:03 np0005540825 systemd[1]: libpod-4da6d4fe959281d49bfa4a68a22af903be2818a504dcfb555cb192b728dc437a.scope: Deactivated successfully.
Dec  1 05:16:03 np0005540825 podman[264270]: 2025-12-01 10:16:03.550719174 +0000 UTC m=+0.025889401 container died 4da6d4fe959281d49bfa4a68a22af903be2818a504dcfb555cb192b728dc437a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 05:16:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:03.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:03 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1923d85460dd2ab5cde2511bd972857ee0e2b789c8fcd887913c531f4ae2e023-merged.mount: Deactivated successfully.
Dec  1 05:16:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:03.634Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:16:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:03.634Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:16:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:03.635Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:16:03 np0005540825 podman[264270]: 2025-12-01 10:16:03.656624137 +0000 UTC m=+0.131794334 container remove 4da6d4fe959281d49bfa4a68a22af903be2818a504dcfb555cb192b728dc437a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_liskov, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 05:16:03 np0005540825 systemd[1]: libpod-conmon-4da6d4fe959281d49bfa4a68a22af903be2818a504dcfb555cb192b728dc437a.scope: Deactivated successfully.
Dec  1 05:16:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 37 op/s
Dec  1 05:16:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:03 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc980016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:03 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:16:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:03.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:04 np0005540825 podman[264406]: 2025-12-01 10:16:04.431634826 +0000 UTC m=+0.060712702 container create d36ef00012d305b0735912fb87df984b591defbc3ec64904943f321d27880171 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 05:16:04 np0005540825 systemd[1]: Started libpod-conmon-d36ef00012d305b0735912fb87df984b591defbc3ec64904943f321d27880171.scope.
Dec  1 05:16:04 np0005540825 podman[264406]: 2025-12-01 10:16:04.408999845 +0000 UTC m=+0.038077801 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:16:04 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:16:04 np0005540825 podman[264406]: 2025-12-01 10:16:04.530051887 +0000 UTC m=+0.159129823 container init d36ef00012d305b0735912fb87df984b591defbc3ec64904943f321d27880171 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 05:16:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:04 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:04 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:04 np0005540825 podman[264406]: 2025-12-01 10:16:04.543705216 +0000 UTC m=+0.172783132 container start d36ef00012d305b0735912fb87df984b591defbc3ec64904943f321d27880171 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:16:04 np0005540825 angry_keller[264423]: 167 167
Dec  1 05:16:04 np0005540825 systemd[1]: libpod-d36ef00012d305b0735912fb87df984b591defbc3ec64904943f321d27880171.scope: Deactivated successfully.
Dec  1 05:16:04 np0005540825 podman[264406]: 2025-12-01 10:16:04.552967056 +0000 UTC m=+0.182044972 container attach d36ef00012d305b0735912fb87df984b591defbc3ec64904943f321d27880171 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  1 05:16:04 np0005540825 podman[264406]: 2025-12-01 10:16:04.553577533 +0000 UTC m=+0.182655439 container died d36ef00012d305b0735912fb87df984b591defbc3ec64904943f321d27880171 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:16:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:04.571 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:16:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:04.573 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:16:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:04.574 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:16:04 np0005540825 systemd[1]: var-lib-containers-storage-overlay-05c0fc8cf1073cf2584ed169669cc9a369fe4ef40951b28254ff58a50a5e9fa7-merged.mount: Deactivated successfully.
Dec  1 05:16:04 np0005540825 podman[264406]: 2025-12-01 10:16:04.606827152 +0000 UTC m=+0.235905038 container remove d36ef00012d305b0735912fb87df984b591defbc3ec64904943f321d27880171 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  1 05:16:04 np0005540825 systemd[1]: libpod-conmon-d36ef00012d305b0735912fb87df984b591defbc3ec64904943f321d27880171.scope: Deactivated successfully.
Dec  1 05:16:04 np0005540825 podman[264420]: 2025-12-01 10:16:04.629502775 +0000 UTC m=+0.140161630 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 05:16:04 np0005540825 podman[264472]: 2025-12-01 10:16:04.823755926 +0000 UTC m=+0.066036766 container create d4c3a999987625edd92d9bb58b6cd26062c259e9d5ed49b94ffaf7e6ea6dbc9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 05:16:04 np0005540825 systemd[1]: Started libpod-conmon-d4c3a999987625edd92d9bb58b6cd26062c259e9d5ed49b94ffaf7e6ea6dbc9c.scope.
Dec  1 05:16:04 np0005540825 podman[264472]: 2025-12-01 10:16:04.79541541 +0000 UTC m=+0.037696310 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:16:04 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:16:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bf21c7c00d5b613f42e0f618c16de1be237b8879406fcefac36fb91afa35bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bf21c7c00d5b613f42e0f618c16de1be237b8879406fcefac36fb91afa35bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bf21c7c00d5b613f42e0f618c16de1be237b8879406fcefac36fb91afa35bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bf21c7c00d5b613f42e0f618c16de1be237b8879406fcefac36fb91afa35bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:16:04 np0005540825 podman[264472]: 2025-12-01 10:16:04.934196242 +0000 UTC m=+0.176477072 container init d4c3a999987625edd92d9bb58b6cd26062c259e9d5ed49b94ffaf7e6ea6dbc9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Dec  1 05:16:04 np0005540825 podman[264472]: 2025-12-01 10:16:04.950203634 +0000 UTC m=+0.192484474 container start d4c3a999987625edd92d9bb58b6cd26062c259e9d5ed49b94ffaf7e6ea6dbc9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:16:04 np0005540825 podman[264472]: 2025-12-01 10:16:04.954660375 +0000 UTC m=+0.196941255 container attach d4c3a999987625edd92d9bb58b6cd26062c259e9d5ed49b94ffaf7e6ea6dbc9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:16:05 np0005540825 nova_compute[256151]: 2025-12-01 10:16:05.107 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:05.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:05 np0005540825 lvm[264564]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:16:05 np0005540825 lvm[264564]: VG ceph_vg0 finished
Dec  1 05:16:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v764: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 48 op/s
Dec  1 05:16:05 np0005540825 jovial_morse[264488]: {}
Dec  1 05:16:05 np0005540825 systemd[1]: libpod-d4c3a999987625edd92d9bb58b6cd26062c259e9d5ed49b94ffaf7e6ea6dbc9c.scope: Deactivated successfully.
Dec  1 05:16:05 np0005540825 systemd[1]: libpod-d4c3a999987625edd92d9bb58b6cd26062c259e9d5ed49b94ffaf7e6ea6dbc9c.scope: Consumed 1.365s CPU time.
Dec  1 05:16:05 np0005540825 podman[264472]: 2025-12-01 10:16:05.806382597 +0000 UTC m=+1.048663437 container died d4c3a999987625edd92d9bb58b6cd26062c259e9d5ed49b94ffaf7e6ea6dbc9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 05:16:05 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d7bf21c7c00d5b613f42e0f618c16de1be237b8879406fcefac36fb91afa35bf-merged.mount: Deactivated successfully.
Dec  1 05:16:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:05 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:05 np0005540825 podman[264472]: 2025-12-01 10:16:05.867257563 +0000 UTC m=+1.109538393 container remove d4c3a999987625edd92d9bb58b6cd26062c259e9d5ed49b94ffaf7e6ea6dbc9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 05:16:05 np0005540825 systemd[1]: libpod-conmon-d4c3a999987625edd92d9bb58b6cd26062c259e9d5ed49b94ffaf7e6ea6dbc9c.scope: Deactivated successfully.
Dec  1 05:16:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:05.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:16:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:16:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:16:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:16:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:06 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:06 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:06 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:16:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:06 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:16:07 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:16:07 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:16:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:07.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:16:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:07.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v765: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Dec  1 05:16:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:07 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:07.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:08 np0005540825 nova_compute[256151]: 2025-12-01 10:16:08.121 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:08 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:08 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:08.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:16:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  1 05:16:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:16:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:16:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:16:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:09.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:16:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:16:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:16:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:16:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:16:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:16:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Dec  1 05:16:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:09 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:09 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  1 05:16:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:09.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:10 np0005540825 nova_compute[256151]: 2025-12-01 10:16:10.107 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:10 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:10 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:11] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Dec  1 05:16:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:11] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Dec  1 05:16:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:16:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:11.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v767: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Dec  1 05:16:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:11 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:11.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:12 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:12 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:13 np0005540825 nova_compute[256151]: 2025-12-01 10:16:13.123 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:13.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:13.637Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:16:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 77 op/s
Dec  1 05:16:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:13.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.374566) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584174374608, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1230, "num_deletes": 251, "total_data_size": 2154147, "memory_usage": 2185040, "flush_reason": "Manual Compaction"}
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584174391038, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 2079191, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23653, "largest_seqno": 24882, "table_properties": {"data_size": 2073500, "index_size": 3022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12889, "raw_average_key_size": 20, "raw_value_size": 2061638, "raw_average_value_size": 3221, "num_data_blocks": 135, "num_entries": 640, "num_filter_entries": 640, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764584076, "oldest_key_time": 1764584076, "file_creation_time": 1764584174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 16532 microseconds, and 5227 cpu microseconds.
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.391093) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 2079191 bytes OK
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.391116) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.392708) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.392724) EVENT_LOG_v1 {"time_micros": 1764584174392719, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.392744) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2148658, prev total WAL file size 2148658, number of live WAL files 2.
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.393714) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(2030KB)], [53(12MB)]
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584174393752, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 15443877, "oldest_snapshot_seqno": -1}
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5432 keys, 13260791 bytes, temperature: kUnknown
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584174467128, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 13260791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13224516, "index_size": 21573, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 139405, "raw_average_key_size": 25, "raw_value_size": 13125993, "raw_average_value_size": 2416, "num_data_blocks": 876, "num_entries": 5432, "num_filter_entries": 5432, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764584174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.467439) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 13260791 bytes
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.468967) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.3 rd, 180.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 12.7 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(13.8) write-amplify(6.4) OK, records in: 5953, records dropped: 521 output_compression: NoCompression
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.469000) EVENT_LOG_v1 {"time_micros": 1764584174468985, "job": 28, "event": "compaction_finished", "compaction_time_micros": 73454, "compaction_time_cpu_micros": 34065, "output_level": 6, "num_output_files": 1, "total_output_size": 13260791, "num_input_records": 5953, "num_output_records": 5432, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584174469887, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584174474230, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.393619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.474377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.474385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.474388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.474391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:16:14 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:16:14.474394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:16:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:14 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:14 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:15 np0005540825 nova_compute[256151]: 2025-12-01 10:16:15.120 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:15.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v769: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 78 op/s
Dec  1 05:16:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:15 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:15.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:16 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101616 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  1 05:16:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:16 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:16 np0005540825 nova_compute[256151]: 2025-12-01 10:16:16.901 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:17.196Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:16:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:17.196Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:16:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:17.196Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:16:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:17.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v770: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 67 op/s
Dec  1 05:16:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:17 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:17.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:18 np0005540825 nova_compute[256151]: 2025-12-01 10:16:18.126 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:18 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:18 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:19.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:16:19 np0005540825 ceph-mgr[74709]: [dashboard INFO request] [192.168.122.100:35622] [POST] [200] [0.003s] [4.0B] [418f5420-a262-4f2b-80ea-9071f0177971] /api/prometheus_receiver
Dec  1 05:16:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:19.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 65 op/s
Dec  1 05:16:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:19 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000028s ======
Dec  1 05:16:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:19.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  1 05:16:20 np0005540825 nova_compute[256151]: 2025-12-01 10:16:20.175 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:20 np0005540825 podman[264621]: 2025-12-01 10:16:20.301016309 +0000 UTC m=+0.098660478 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 05:16:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:20 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:20 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:21] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Dec  1 05:16:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:21] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Dec  1 05:16:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:21.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 198 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec  1 05:16:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:21 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:21.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:22 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:22 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:22 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:23 np0005540825 nova_compute[256151]: 2025-12-01 10:16:23.127 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:23.638Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:16:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:23.638Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:16:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:23.638Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:16:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:23.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 198 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec  1 05:16:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:23 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:23.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:16:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:16:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:24 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:24 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:25 np0005540825 nova_compute[256151]: 2025-12-01 10:16:25.179 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:25 np0005540825 podman[264672]: 2025-12-01 10:16:25.253509601 +0000 UTC m=+0.095443781 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  1 05:16:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:25.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v774: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 302 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  1 05:16:25 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:25 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:16:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:25.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:16:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:26 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003cf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:26 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00011c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:27.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:16:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:27.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  1 05:16:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:27 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:27.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:28 np0005540825 nova_compute[256151]: 2025-12-01 10:16:28.130 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:28 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:28 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:29.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v776: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  1 05:16:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:29 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb40008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:29.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:30 np0005540825 nova_compute[256151]: 2025-12-01 10:16:30.222 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:30 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc0001ae0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:30 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:30 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:31] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Dec  1 05:16:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:31] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Dec  1 05:16:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:31.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v777: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec  1 05:16:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:31 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:31.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:32 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb4000a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:32 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:32 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc0001ae0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:33 np0005540825 nova_compute[256151]: 2025-12-01 10:16:33.133 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:33.639Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:16:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:33.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 59 KiB/s wr, 36 op/s
Dec  1 05:16:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:33 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:33.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:34 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:34 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb4000a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:35 np0005540825 nova_compute[256151]: 2025-12-01 10:16:35.265 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:35 np0005540825 podman[264705]: 2025-12-01 10:16:35.289783755 +0000 UTC m=+0.152265457 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 05:16:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:35.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v779: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 68 KiB/s wr, 37 op/s
Dec  1 05:16:35 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:35 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc0001ae0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:35.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:36 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc0001ae0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:36 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:36 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:37.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:16:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:37.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:37 np0005540825 ovn_controller[153404]: 2025-12-01T10:16:37Z|00034|binding|INFO|Releasing lport d9b22cb4-2520-4db5-9f61-76a8a39f3543 from this chassis (sb_readonly=0)
Dec  1 05:16:37 np0005540825 nova_compute[256151]: 2025-12-01 10:16:37.752 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 30 op/s
Dec  1 05:16:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:37 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:37.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.134 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.566 256155 DEBUG nova.compute.manager [req-c4c3a32a-2830-4236-b089-9fddbeca42e4 req-089a769a-fd1e-42bf-ac35-3a8d4faa25c4 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Received event network-changed-f76722ac-216e-4706-9ca6-804d90bbbc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.566 256155 DEBUG nova.compute.manager [req-c4c3a32a-2830-4236-b089-9fddbeca42e4 req-089a769a-fd1e-42bf-ac35-3a8d4faa25c4 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Refreshing instance network info cache due to event network-changed-f76722ac-216e-4706-9ca6-804d90bbbc7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.566 256155 DEBUG oslo_concurrency.lockutils [req-c4c3a32a-2830-4236-b089-9fddbeca42e4 req-089a769a-fd1e-42bf-ac35-3a8d4faa25c4 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.567 256155 DEBUG oslo_concurrency.lockutils [req-c4c3a32a-2830-4236-b089-9fddbeca42e4 req-089a769a-fd1e-42bf-ac35-3a8d4faa25c4 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.567 256155 DEBUG nova.network.neutron [req-c4c3a32a-2830-4236-b089-9fddbeca42e4 req-089a769a-fd1e-42bf-ac35-3a8d4faa25c4 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Refreshing network info cache for port f76722ac-216e-4706-9ca6-804d90bbbc7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 05:16:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:38 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb4002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:38 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.625 256155 DEBUG oslo_concurrency.lockutils [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.626 256155 DEBUG oslo_concurrency.lockutils [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.626 256155 DEBUG oslo_concurrency.lockutils [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.627 256155 DEBUG oslo_concurrency.lockutils [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.627 256155 DEBUG oslo_concurrency.lockutils [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.629 256155 INFO nova.compute.manager [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Terminating instance
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.631 256155 DEBUG nova.compute.manager [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  1 05:16:38 np0005540825 kernel: tapf76722ac-21 (unregistering): left promiscuous mode
Dec  1 05:16:38 np0005540825 NetworkManager[48963]: <info>  [1764584198.7016] device (tapf76722ac-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 05:16:38 np0005540825 ovn_controller[153404]: 2025-12-01T10:16:38Z|00035|binding|INFO|Releasing lport f76722ac-216e-4706-9ca6-804d90bbbc7f from this chassis (sb_readonly=0)
Dec  1 05:16:38 np0005540825 ovn_controller[153404]: 2025-12-01T10:16:38Z|00036|binding|INFO|Setting lport f76722ac-216e-4706-9ca6-804d90bbbc7f down in Southbound
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.711 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:38 np0005540825 ovn_controller[153404]: 2025-12-01T10:16:38Z|00037|binding|INFO|Removing iface tapf76722ac-21 ovn-installed in OVS
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.713 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:38.719 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:86:43 10.100.0.13'], port_security=['fa:16:3e:64:86:43 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c466ba6-3850-4dac-846e-cf97ed839b53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10e0d4a2-5f12-4bc6-a3e3-16e6e801f68c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c323932-e602-4ad2-aee6-0c52ba24fdb8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=f76722ac-216e-4706-9ca6-804d90bbbc7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 05:16:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:38.721 163291 INFO neutron.agent.ovn.metadata.agent [-] Port f76722ac-216e-4706-9ca6-804d90bbbc7f in datapath 8c466ba6-3850-4dac-846e-cf97ed839b53 unbound from our chassis
Dec  1 05:16:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:38.722 163291 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c466ba6-3850-4dac-846e-cf97ed839b53, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  1 05:16:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:38.724 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e747e0-e79e-44d8-b798-7499d71f3d23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 05:16:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:38.725 163291 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53 namespace which is not needed anymore
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.730 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:16:38 np0005540825 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  1 05:16:38 np0005540825 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 18.463s CPU time.
Dec  1 05:16:38 np0005540825 systemd-machined[216307]: Machine qemu-1-instance-00000001 terminated.
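The burst above is one teardown unit: Nova starts _shutdown_instance, the tap device leaves promiscuous mode, ovn-controller releases the logical port and marks it down in the Southbound DB, the metadata agent's ovsdbapp event matcher picks up the Port_Binding change ("Matched UPDATE: PortBindingUpdatedEvent"), concludes the network has no VIFs left on this chassis and schedules the ovnmeta namespace for cleanup, while systemd reaps the qemu machine scope. A minimal sketch of that event-matching pattern, assuming ovsdbapp is installed; the class below is illustrative, not Neutron's actual implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundEvent(row_event.RowEvent):
        """Fire when a Port_Binding row loses its chassis (port unbound)."""

        def __init__(self):
            # Watch only 'update' events on the Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # 'old' carries the previous column values; a chassis that was
            # set before and is empty now means the port was just unbound.
            return hasattr(old, 'chassis') and not row.chassis

        def run(self, event, row, old):
            print('port %s unbound from our chassis' % row.logical_port)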
Dec  1 05:16:38 np0005540825 neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53[262829]: [NOTICE]   (262833) : haproxy version is 2.8.14-c23fe91
Dec  1 05:16:38 np0005540825 neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53[262829]: [NOTICE]   (262833) : path to executable is /usr/sbin/haproxy
Dec  1 05:16:38 np0005540825 neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53[262829]: [WARNING]  (262833) : Exiting Master process...
Dec  1 05:16:38 np0005540825 neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53[262829]: [WARNING]  (262833) : Exiting Master process...
Dec  1 05:16:38 np0005540825 neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53[262829]: [ALERT]    (262833) : Current worker (262835) exited with code 143 (Terminated)
Dec  1 05:16:38 np0005540825 neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53[262829]: [WARNING]  (262833) : All workers exited. Exiting... (0)
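The haproxy worker exiting "with code 143" is not a crash: 143 is the conventional 128 + signal-number encoding, i.e. SIGTERM (15), which matches the deliberate container stop that follows. A quick check:

    import signal

    code = 143
    # 128 + 15 (SIGTERM): the worker was terminated, it did not fail.
    assert code - 128 == signal.SIGTERM
    print(signal.Signals(code - 128).name)  # -> SIGTERM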
Dec  1 05:16:38 np0005540825 systemd[1]: libpod-a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc.scope: Deactivated successfully.
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.880 256155 INFO nova.virt.libvirt.driver [-] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Instance destroyed successfully.#033[00m
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.881 256155 DEBUG nova.objects.instance [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'resources' on Instance uuid 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:16:38 np0005540825 podman[264761]: 2025-12-01 10:16:38.890862638 +0000 UTC m=+0.065449781 container died a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.899 256155 DEBUG nova.virt.libvirt.vif [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T10:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1016315753',display_name='tempest-TestNetworkBasicOps-server-1016315753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1016315753',id=1,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJyiZhD79g//PFP56TQaBy3YxEM3LBaA7EcVZ7Tdz/6gMAGTnZhgjP7lR7qjlPZM7TMPAJaWDsBbZE4mpPdHpXPHvYJjJulnETj6bgJEdlnDSD6q5Pc5uIGO8IM6SZd+A==',key_name='tempest-TestNetworkBasicOps-370696341',keypairs=<?>,launch_index=0,launched_at=2025-12-01T10:15:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-a9okexes',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T10:15:16Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.900 256155 DEBUG nova.network.os_vif_util [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.902 256155 DEBUG nova.network.os_vif_util [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:86:43,bridge_name='br-int',has_traffic_filtering=True,id=f76722ac-216e-4706-9ca6-804d90bbbc7f,network=Network(8c466ba6-3850-4dac-846e-cf97ed839b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf76722ac-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.903 256155 DEBUG os_vif [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:86:43,bridge_name='br-int',has_traffic_filtering=True,id=f76722ac-216e-4706-9ca6-804d90bbbc7f,network=Network(8c466ba6-3850-4dac-846e-cf97ed839b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf76722ac-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.906 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.907 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf76722ac-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.921 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.924 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 05:16:38 np0005540825 nova_compute[256151]: 2025-12-01 10:16:38.929 256155 INFO os_vif [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:86:43,bridge_name='br-int',has_traffic_filtering=True,id=f76722ac-216e-4706-9ca6-804d90bbbc7f,network=Network(8c466ba6-3850-4dac-846e-cf97ed839b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf76722ac-21')#033[00m
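The DelPortCommand transaction at 10:16:38.907 is what "Successfully unplugged vif" amounts to on the OVS side: delete tapf76722ac-21 from br-int, tolerating its absence. A minimal sketch of the same call through ovsdbapp's Open_vSwitch API, assuming the local OVSDB socket path shown below:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),
        timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)
    # Equivalent of: ovs-vsctl --if-exists del-port br-int tapf76722ac-21
    ovs.del_port('tapf76722ac-21', bridge='br-int', if_exists=True).execute(
        check_error=True)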
Dec  1 05:16:38 np0005540825 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc-userdata-shm.mount: Deactivated successfully.
Dec  1 05:16:38 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c583df219ed25fc32e87c072f5ce968ee8854ed2108a4c80ec9ecdaddac8339f-merged.mount: Deactivated successfully.
Dec  1 05:16:38 np0005540825 podman[264761]: 2025-12-01 10:16:38.959643897 +0000 UTC m=+0.134231030 container cleanup a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 05:16:38 np0005540825 systemd[1]: libpod-conmon-a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc.scope: Deactivated successfully.
Dec  1 05:16:39 np0005540825 podman[264815]: 2025-12-01 10:16:39.063513675 +0000 UTC m=+0.065434470 container remove a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 05:16:39 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:39.075 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb493e7-9734-498b-892e-53a6481ad2be]: (4, ('Mon Dec  1 10:16:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53 (a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc)\na8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc\nMon Dec  1 10:16:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53 (a8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc)\na8de317abab3b907b179154214398ee2bcc6f956c79762c847194cc6890af8fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
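The privsep reply above is the wrapper script's stdout: the per-network haproxy container is stopped, then deleted, each step echoing the container ID. A bare sketch of those two steps with the podman CLI, error handling omitted:

    import subprocess

    name = 'neutron-haproxy-ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53'
    subprocess.run(['podman', 'stop', name], check=True)  # SIGTERM, then wait
    subprocess.run(['podman', 'rm', name], check=True)    # remove the stopped container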
Dec  1 05:16:39 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:39.078 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c25093-a0cb-4a0d-a181-26a9328deb0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:16:39 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:39.079 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c466ba6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:16:39 np0005540825 nova_compute[256151]: 2025-12-01 10:16:39.082 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:39 np0005540825 kernel: tap8c466ba6-30: left promiscuous mode
Dec  1 05:16:39 np0005540825 nova_compute[256151]: 2025-12-01 10:16:39.099 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:39 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:39.102 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[6786a37e-791c-4a9f-ae9b-3fabf49b09af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:16:39 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:39.128 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[5aafcc4b-3cb2-40cd-8c23-53a8f620dc42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:16:39 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:39.130 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[80b7cf4b-9a6f-4c12-8b7b-24e42e90876b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:16:39 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:39.149 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[7875bec0-9b30-44f0-ae5c-3cdd63250eff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398792, 'reachable_time': 36126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264833, 'error': None, 'target': 'ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:16:39 np0005540825 systemd[1]: run-netns-ovnmeta\x2d8c466ba6\x2d3850\x2d4dac\x2d846e\x2dcf97ed839b53.mount: Deactivated successfully.
Dec  1 05:16:39 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:39.170 163408 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 05:16:39 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:39.171 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[a762b271-05bd-4581-954a-1329efc546ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
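With the last VIF gone and the haproxy container removed, the agent deletes the ovnmeta network namespace itself. A sketch of that removal with pyroute2, the library Neutron's privileged ip_lib wraps; requires root:

    from pyroute2 import netns

    ns = 'ovnmeta-8c466ba6-3850-4dac-846e-cf97ed839b53'
    if ns in netns.listnetns():
        netns.remove(ns)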
Dec  1 05:16:39 np0005540825 nova_compute[256151]: 2025-12-01 10:16:39.410 256155 INFO nova.virt.libvirt.driver [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Deleting instance files /var/lib/nova/instances/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_del#033[00m
Dec  1 05:16:39 np0005540825 nova_compute[256151]: 2025-12-01 10:16:39.411 256155 INFO nova.virt.libvirt.driver [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Deletion of /var/lib/nova/instances/60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20_del complete#033[00m
Dec  1 05:16:39 np0005540825 nova_compute[256151]: 2025-12-01 10:16:39.459 256155 DEBUG nova.virt.libvirt.host [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec  1 05:16:39 np0005540825 nova_compute[256151]: 2025-12-01 10:16:39.460 256155 INFO nova.virt.libvirt.host [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] UEFI support detected#033[00m
Dec  1 05:16:39 np0005540825 nova_compute[256151]: 2025-12-01 10:16:39.462 256155 INFO nova.compute.manager [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 05:16:39 np0005540825 nova_compute[256151]: 2025-12-01 10:16:39.463 256155 DEBUG oslo.service.loopingcall [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 05:16:39 np0005540825 nova_compute[256151]: 2025-12-01 10:16:39.463 256155 DEBUG nova.compute.manager [-] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 05:16:39 np0005540825 nova_compute[256151]: 2025-12-01 10:16:39.463 256155 DEBUG nova.network.neutron [-] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:16:39
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['backups', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'images', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.log', 'default.rgw.control', '.nfs', 'cephfs.cephfs.meta']
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
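The balancer pass above is a no-op: in upmap mode, with every PG already active+clean, it prepared 0 of a possible 10 upmap changes. A sketch for confirming the same thing from the CLI, assuming the JSON keys exposed by recent Ceph releases:

    import json
    import subprocess

    out = subprocess.run(['ceph', 'balancer', 'status', '--format', 'json'],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status.get('mode'), status.get('active'), status.get('optimize_result'))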
Dec  1 05:16:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:16:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:16:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:39.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 22 KiB/s wr, 29 op/s
Dec  1 05:16:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:39 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc0001ae0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007606720469492739 of space, bias 1.0, pg target 0.22820161408478218 quantized to 32 (current 32)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
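The pg_autoscaler lines above all follow one relation: pg target = capacity ratio x bias x a cluster-wide PG budget, after which the target is quantized and compared with the pool's current pg_num. With the numbers as logged the budget works out to exactly 300, consistent with e.g. 3 OSDs at the default mon_target_pg_per_osd of 100 (an assumption, the OSD count is not in this excerpt). A worked check against three of the pools:

    PG_BUDGET = 300  # assumption: 3 OSDs x mon_target_pg_per_osd=100

    pools = {  # name: (capacity_ratio, bias, pg target as logged)
        '.mgr': (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        'vms': (0.0007606720469492739, 1.0, 0.22820161408478218),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    }
    for name, (ratio, bias, logged) in pools.items():
        assert abs(ratio * bias * PG_BUDGET - logged) < 1e-9, name
    print('pg target == ratio * bias * 300 for all three pools')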
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:16:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:16:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:39.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
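The recurring radosgw "beast" entries are anonymous "HEAD / HTTP/1.0" probes arriving roughly every two seconds from 192.168.122.100 and .102, i.e. load-balancer health checks, all answered 200 with sub-millisecond latency. A sketch for pulling fields out of that access-log shape (as observed here; the line format is not a documented stable interface):

    import re

    line = ('beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous '
            '[01/Dec/2025:10:16:39.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = re.search(r'beast: \S+: (\S+) - (\S+) \[([^\]]+)\] '
                  r'"([A-Z]+) (\S+) [^"]+" (\d+) (\d+).*latency=([\d.]+)s', line)
    client, user, when, verb, path, status, size, latency = m.groups()
    print(client, verb, path, status, float(latency))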
Dec  1 05:16:40 np0005540825 nova_compute[256151]: 2025-12-01 10:16:40.316 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:40 np0005540825 nova_compute[256151]: 2025-12-01 10:16:40.488 256155 DEBUG nova.compute.manager [req-1e91900f-70a7-47d2-a488-39d7c19e347b req-3a22765b-87e5-4a3e-b396-2a03386be247 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Received event network-vif-unplugged-f76722ac-216e-4706-9ca6-804d90bbbc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:16:40 np0005540825 nova_compute[256151]: 2025-12-01 10:16:40.488 256155 DEBUG oslo_concurrency.lockutils [req-1e91900f-70a7-47d2-a488-39d7c19e347b req-3a22765b-87e5-4a3e-b396-2a03386be247 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:16:40 np0005540825 nova_compute[256151]: 2025-12-01 10:16:40.489 256155 DEBUG oslo_concurrency.lockutils [req-1e91900f-70a7-47d2-a488-39d7c19e347b req-3a22765b-87e5-4a3e-b396-2a03386be247 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:16:40 np0005540825 nova_compute[256151]: 2025-12-01 10:16:40.489 256155 DEBUG oslo_concurrency.lockutils [req-1e91900f-70a7-47d2-a488-39d7c19e347b req-3a22765b-87e5-4a3e-b396-2a03386be247 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:16:40 np0005540825 nova_compute[256151]: 2025-12-01 10:16:40.489 256155 DEBUG nova.compute.manager [req-1e91900f-70a7-47d2-a488-39d7c19e347b req-3a22765b-87e5-4a3e-b396-2a03386be247 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] No waiting events found dispatching network-vif-unplugged-f76722ac-216e-4706-9ca6-804d90bbbc7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:16:40 np0005540825 nova_compute[256151]: 2025-12-01 10:16:40.490 256155 DEBUG nova.compute.manager [req-1e91900f-70a7-47d2-a488-39d7c19e347b req-3a22765b-87e5-4a3e-b396-2a03386be247 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Received event network-vif-unplugged-f76722ac-216e-4706-9ca6-804d90bbbc7f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
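The five lines above are one event dispatch: Neutron tells Nova the VIF is unplugged, and Nova brackets a very short critical section with a per-instance "-events" lock, finds no registered waiter, and just logs the event against the deleting task_state. The locking pattern, sketched with oslo.concurrency (the helper name and dict below are illustrative):

    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, pending_events, event_name):
        # Same shape as the log: acquire '<uuid>-events', pop any waiter
        # registered for this event, release. Returns None when, as above,
        # "No waiting events found dispatching".
        with lockutils.lock(f'{instance_uuid}-events'):
            return pending_events.pop(event_name, None)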
Dec  1 05:16:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:40 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:40 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:40 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb4002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.038 256155 DEBUG nova.network.neutron [-] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.067 256155 INFO nova.compute.manager [-] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Took 1.60 seconds to deallocate network for instance.#033[00m
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.142 256155 DEBUG oslo_concurrency.lockutils [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.143 256155 DEBUG oslo_concurrency.lockutils [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.202 256155 DEBUG oslo_concurrency.processutils [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.243 256155 DEBUG nova.network.neutron [req-c4c3a32a-2830-4236-b089-9fddbeca42e4 req-089a769a-fd1e-42bf-ac35-3a8d4faa25c4 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Updated VIF entry in instance network info cache for port f76722ac-216e-4706-9ca6-804d90bbbc7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.245 256155 DEBUG nova.network.neutron [req-c4c3a32a-2830-4236-b089-9fddbeca42e4 req-089a769a-fd1e-42bf-ac35-3a8d4faa25c4 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Updating instance_info_cache with network_info: [{"id": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "address": "fa:16:3e:64:86:43", "network": {"id": "8c466ba6-3850-4dac-846e-cf97ed839b53", "bridge": "br-int", "label": "tempest-network-smoke--1786448833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf76722ac-21", "ovs_interfaceid": "f76722ac-216e-4706-9ca6-804d90bbbc7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.284 256155 DEBUG oslo_concurrency.lockutils [req-c4c3a32a-2830-4236-b089-9fddbeca42e4 req-089a769a-fd1e-42bf-ac35-3a8d4faa25c4 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:16:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:41] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:16:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:41] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:16:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:41.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:16:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3976190664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.727 256155 DEBUG oslo_concurrency.processutils [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.733 256155 DEBUG nova.compute.provider_tree [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.746 256155 DEBUG nova.scheduler.client.report [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
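The resource tracker refreshes its disk inventory by shelling out to `ceph df` (0.525s here) because the instance disks live in RBD; the DISK_GB total of 59 is derived from the roughly 60 GiB of raw capacity visible in the pgmap lines. A sketch of that derivation, assuming the stats.total_bytes key of the current `ceph df` JSON schema:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)['stats']
    disk_gb = stats['total_bytes'] // 1024 ** 3  # floor to whole GiB
    print(disk_gb)  # the tracker above reports DISK_GB total=59 from ~60 GiB raw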
Dec  1 05:16:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v782: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 24 KiB/s wr, 57 op/s
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.764 256155 DEBUG oslo_concurrency.lockutils [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.786 256155 INFO nova.scheduler.client.report [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Deleted allocations for instance 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20#033[00m
Dec  1 05:16:41 np0005540825 nova_compute[256151]: 2025-12-01 10:16:41.881 256155 DEBUG oslo_concurrency.lockutils [None req-ce406090-82ff-471b-9cba-2f9c9b63bfee 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:16:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:41 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:41.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:42 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:42 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:42 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:42 np0005540825 nova_compute[256151]: 2025-12-01 10:16:42.581 256155 DEBUG nova.compute.manager [req-1b9dd633-e312-4e5a-86de-b063189a0e22 req-3e8ba6c4-c764-4d61-b967-3d0e917cc5a0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Received event network-vif-plugged-f76722ac-216e-4706-9ca6-804d90bbbc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:16:42 np0005540825 nova_compute[256151]: 2025-12-01 10:16:42.582 256155 DEBUG oslo_concurrency.lockutils [req-1b9dd633-e312-4e5a-86de-b063189a0e22 req-3e8ba6c4-c764-4d61-b967-3d0e917cc5a0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:16:42 np0005540825 nova_compute[256151]: 2025-12-01 10:16:42.582 256155 DEBUG oslo_concurrency.lockutils [req-1b9dd633-e312-4e5a-86de-b063189a0e22 req-3e8ba6c4-c764-4d61-b967-3d0e917cc5a0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:16:42 np0005540825 nova_compute[256151]: 2025-12-01 10:16:42.582 256155 DEBUG oslo_concurrency.lockutils [req-1b9dd633-e312-4e5a-86de-b063189a0e22 req-3e8ba6c4-c764-4d61-b967-3d0e917cc5a0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:16:42 np0005540825 nova_compute[256151]: 2025-12-01 10:16:42.582 256155 DEBUG nova.compute.manager [req-1b9dd633-e312-4e5a-86de-b063189a0e22 req-3e8ba6c4-c764-4d61-b967-3d0e917cc5a0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] No waiting events found dispatching network-vif-plugged-f76722ac-216e-4706-9ca6-804d90bbbc7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:16:42 np0005540825 nova_compute[256151]: 2025-12-01 10:16:42.583 256155 WARNING nova.compute.manager [req-1b9dd633-e312-4e5a-86de-b063189a0e22 req-3e8ba6c4-c764-4d61-b967-3d0e917cc5a0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Received unexpected event network-vif-plugged-f76722ac-216e-4706-9ca6-804d90bbbc7f for instance with vm_state deleted and task_state None.#033[00m
Dec  1 05:16:42 np0005540825 nova_compute[256151]: 2025-12-01 10:16:42.583 256155 DEBUG nova.compute.manager [req-1b9dd633-e312-4e5a-86de-b063189a0e22 req-3e8ba6c4-c764-4d61-b967-3d0e917cc5a0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Received event network-vif-deleted-f76722ac-216e-4706-9ca6-804d90bbbc7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:16:42 np0005540825 nova_compute[256151]: 2025-12-01 10:16:42.583 256155 INFO nova.compute.manager [req-1b9dd633-e312-4e5a-86de-b063189a0e22 req-3e8ba6c4-c764-4d61-b967-3d0e917cc5a0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Neutron deleted interface f76722ac-216e-4706-9ca6-804d90bbbc7f; detaching it from the instance and deleting it from the info cache#033[00m
Dec  1 05:16:42 np0005540825 nova_compute[256151]: 2025-12-01 10:16:42.583 256155 DEBUG nova.network.neutron [req-1b9dd633-e312-4e5a-86de-b063189a0e22 req-3e8ba6c4-c764-4d61-b967-3d0e917cc5a0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Dec  1 05:16:42 np0005540825 nova_compute[256151]: 2025-12-01 10:16:42.586 256155 DEBUG nova.compute.manager [req-1b9dd633-e312-4e5a-86de-b063189a0e22 req-3e8ba6c4-c764-4d61-b967-3d0e917cc5a0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Detach interface failed, port_id=f76722ac-216e-4706-9ca6-804d90bbbc7f, reason: Instance 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  1 05:16:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:43.641Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:16:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:43.641Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:16:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:43.641Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
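The alertmanager warnings above show its webhook delivery loop: the dashboard receivers on compute-1 and compute-2 are unreachable, so each POST is retried with backoff until an overall context deadline, then the notification is dropped with the attempt count. A generic sketch of that retry-until-deadline shape in Python (Alertmanager itself is Go; the URL in the comment is taken from the log):

    import time
    import urllib.request

    def notify(url, payload, deadline_s=10.0, backoff_s=1.0):
        start, attempts = time.monotonic(), 0
        while True:
            attempts += 1
            try:
                req = urllib.request.Request(
                    url, data=payload,
                    headers={'Content-Type': 'application/json'})
                return urllib.request.urlopen(req, timeout=2.0)
            except OSError as exc:  # connection refused, i/o timeout, ...
                if time.monotonic() - start + backoff_s > deadline_s:
                    raise TimeoutError(f'notify retry canceled after '
                                       f'{attempts} attempts') from exc
                time.sleep(backoff_s)
                backoff_s *= 2  # back off between attempts

    # notify('http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver', b'{}')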
Dec  1 05:16:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:43.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 29 op/s
Dec  1 05:16:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:43 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb4002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:43 np0005540825 nova_compute[256151]: 2025-12-01 10:16:43.922 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:43.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:44 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb4002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:44 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:45 np0005540825 nova_compute[256151]: 2025-12-01 10:16:45.317 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:45.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 29 op/s
Dec  1 05:16:45 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:45 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:45.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:46 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb4002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:46 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:46 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:47 np0005540825 nova_compute[256151]: 2025-12-01 10:16:47.005 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:47.006 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:16:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:47.007 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 05:16:47 np0005540825 nova_compute[256151]: 2025-12-01 10:16:47.191 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:47.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
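The dispatcher error shows Alertmanager's ceph-dashboard webhook receivers on compute-1 and compute-2 timing out. A quick reachability sketch for the same URL (taken verbatim from the log line; the empty alert payload is an assumption) helps separate connect timeouts from HTTP-level failures:

    # Sketch: POST to the prometheus_receiver endpoint Alertmanager cannot
    # reach. URL is from the log; the minimal payload is an assumption.
    import json, urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=json.dumps({"alerts": []}).encode(),
                                 headers={"Content-Type": "application/json"})
    try:
        print(urllib.request.urlopen(req, timeout=5).status)
    except OSError as exc:   # URLError/timeout both derive from OSError
        print("unreachable:", exc)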
Dec  1 05:16:47 np0005540825 nova_compute[256151]: 2025-12-01 10:16:47.277 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:16:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:47 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:47.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:48 np0005540825 nova_compute[256151]: 2025-12-01 10:16:48.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:16:48 np0005540825 nova_compute[256151]: 2025-12-01 10:16:48.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 05:16:48 np0005540825 nova_compute[256151]: 2025-12-01 10:16:48.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 05:16:48 np0005540825 nova_compute[256151]: 2025-12-01 10:16:48.048 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
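The _heal_instance_info_cache burst above is one pass of nova-compute's periodic task machinery: run_periodic_tasks walks every decorated method, per the oslo_service/periodic_task.py path in the log. A minimal sketch of that pattern, assuming an illustrative manager class and spacing (not nova's actual code):

    # Minimal sketch of the oslo.service periodic-task pattern traced above.
    # Class name, spacing, and body are illustrative assumptions.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            print("rebuilding instance network info cache")

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)   # one pass, as logged every minute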
Dec  1 05:16:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:48 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:48 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb4004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:48 np0005540825 nova_compute[256151]: 2025-12-01 10:16:48.924 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:49 np0005540825 nova_compute[256151]: 2025-12-01 10:16:49.043 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:16:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:49.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  1 05:16:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:49 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:49.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:50 np0005540825 nova_compute[256151]: 2025-12-01 10:16:50.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:16:50 np0005540825 nova_compute[256151]: 2025-12-01 10:16:50.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:16:50 np0005540825 nova_compute[256151]: 2025-12-01 10:16:50.319 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:50 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:50 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:50 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:51 np0005540825 nova_compute[256151]: 2025-12-01 10:16:51.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:16:51 np0005540825 podman[264896]: 2025-12-01 10:16:51.234122526 +0000 UTC m=+0.089501121 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 05:16:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:51] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:16:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:16:51] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:16:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:51.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:16:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:51 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb4004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:51.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:52 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:52 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:16:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 5572 writes, 25K keys, 5572 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
Cumulative WAL: 5572 writes, 5572 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1556 writes, 6851 keys, 1556 commit groups, 1.0 writes per commit group, ingest: 11.60 MB, 0.02 MB/s
Interval WAL: 1556 writes, 1556 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     46.7      0.83              0.13        14    0.059       0      0       0.0       0.0
  L6      1/0   12.65 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.3     75.8     65.6      2.55              0.57        13    0.196     67K   6795       0.0       0.0
 Sum      1/0   12.65 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.3     57.2     60.9      3.38              0.70        27    0.125     67K   6795       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6    129.0    130.6      0.68              0.32        12    0.057     34K   3122       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     75.8     65.6      2.55              0.57        13    0.196     67K   6795       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     46.9      0.82              0.13        13    0.063       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.038, interval 0.012
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.19 GB read, 0.11 MB/s read, 3.4 seconds
Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563970129350#2 capacity: 304.00 MB usage: 13.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000121 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(741,13.16 MB,4.32886%) FilterBlock(28,202.30 KB,0.0649854%) IndexBlock(28,350.45 KB,0.112579%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
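The #033[00m tails on the nova_compute and ovn_metadata_agent lines are rsyslog's control-character escaping: a non-printable byte is written as # plus three octal digits, so #033 is ESC (making #033[00m the ANSI color reset oslo.log emits) and an embedded newline would appear as #012. A small unescape helper, assuming plain Python post-processing of this log:

    # Undo rsyslog's #ooo octal escaping (#012 = newline, #033 = ESC) so
    # ANSI sequences and folded multi-line dumps are restored.
    import re

    def unescape_rsyslog(msg: str) -> str:
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), msg)

    print(unescape_rsyslog("line one#012line two#033[00m"))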
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.029 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.054 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.055 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.055 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
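The Acquiring/acquired/released triple around "compute_resources" is oslo.concurrency's lock tracing: the "by ..." name in each line is the function wrapped by the synchronized decorator. A sketch of that pattern (the body is an illustrative stand-in for the resource tracker's real work):

    # Sketch of the oslo.concurrency locking behind the DEBUG triple above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # runs only while "compute_resources" is held

    clean_compute_node_cache()   # logs acquiring/acquired/released at DEBUG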
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.055 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.056 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:16:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:16:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764423619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.593 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
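As the processutils lines show, the resource audit shells out to ceph df to size the RBD-backed storage. A sketch of the same call and the cluster-total fields in its documented JSON output:

    # Sketch: run the same `ceph df --format=json` command the resource
    # tracker runs (flags copied from the log) and read the cluster totals.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)
    total = stats["stats"]["total_bytes"]
    avail = stats["stats"]["total_avail_bytes"]
    print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")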
Dec  1 05:16:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:53.643Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:16:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:53.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.811 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.813 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4612MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.813 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.813 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.877 256155 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764584198.8767266, 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.878 256155 INFO nova.compute.manager [-] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] VM Stopped (Lifecycle Event)#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.887 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.887 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 05:16:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:53 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.902 256155 DEBUG nova.compute.manager [None req-88bda5f6-d389-4710-86f9-2116cf8d1ea1 - - - - - -] [instance: 60fb4ca8-eb2d-43e9-b76a-0ff7a7fcae20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.907 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:16:53 np0005540825 nova_compute[256151]: 2025-12-01 10:16:53.936 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:53.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:16:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2087643151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:16:54 np0005540825 nova_compute[256151]: 2025-12-01 10:16:54.447 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:16:54 np0005540825 nova_compute[256151]: 2025-12-01 10:16:54.455 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:16:54 np0005540825 nova_compute[256151]: 2025-12-01 10:16:54.471 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
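Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio; with the numbers in the line above, a worked check:

    # Effective capacity from the inventory record in the log line:
    # usable = (total - reserved) * allocation_ratio
    inv = {"MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
           "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
           "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9}}

    for rc, v in inv.items():
        usable = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, usable)   # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2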
Dec  1 05:16:54 np0005540825 nova_compute[256151]: 2025-12-01 10:16:54.490 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 05:16:54 np0005540825 nova_compute[256151]: 2025-12-01 10:16:54.490 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:16:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:16:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:16:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:54 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcb4004280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:54 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:16:55 np0005540825 nova_compute[256151]: 2025-12-01 10:16:55.357 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:55.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:16:55 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:55 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:16:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:55.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:16:56 np0005540825 podman[264968]: 2025-12-01 10:16:56.25590932 +0000 UTC m=+0.115246676 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
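These podman health_status events come from the healthcheck declared in config_data ('test': '/openstack/healthcheck'); the current state can be read back with podman inspect, sketched here via subprocess:

    # Sketch: read back the health state podman records in the event above.
    # Recent podman exposes it at .State.Health.Status; older releases used
    # .State.Healthcheck.Status instead.
    import subprocess

    status = subprocess.check_output(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "multipathd"]).decode().strip()
    print(status)   # "healthy", matching health_status in the event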
Dec  1 05:16:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:56 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:56 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:56 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:57 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:16:57.008 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
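The transaction trace shows the metadata agent acknowledging nb_cfg=5 by setting neutron:ovn-metadata-sb-cfg in Chassis_Private.external_ids. A sketch of how application code issues that same DbSetCommand, assuming `api` is an already-connected ovsdbapp backend for the OVN Southbound DB (connection setup omitted):

    # Sketch of the DbSetCommand traced above; `api` is an assumed,
    # already-connected ovsdbapp OVN Southbound backend.
    def bump_metadata_sb_cfg(api, chassis_uuid, nb_cfg):
        api.db_set(
            "Chassis_Private", chassis_uuid,     # record UUID as in the log
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ).execute(check_error=True)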
Dec  1 05:16:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:57.201Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:16:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:16:57.201Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:16:57 np0005540825 nova_compute[256151]: 2025-12-01 10:16:57.489 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:16:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:57.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  1 05:16:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:57 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:57.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:58 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:58 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:58 np0005540825 nova_compute[256151]: 2025-12-01 10:16:58.940 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:16:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:16:59.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:16:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:16:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:16:59 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:16:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:16:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:16:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:16:59.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:00 np0005540825 nova_compute[256151]: 2025-12-01 10:17:00.399 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:17:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:00 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:00 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:01] "GET /metrics HTTP/1.1" 200 48523 "" "Prometheus/2.51.0"
Dec  1 05:17:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:01] "GET /metrics HTTP/1.1" 200 48523 "" "Prometheus/2.51.0"
Dec  1 05:17:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:01.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec  1 05:17:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:01 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:01.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:02 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:02 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:03.644Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:17:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:03.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  1 05:17:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:03 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:03.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:03 np0005540825 nova_compute[256151]: 2025-12-01 10:17:03.978 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:17:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:17:04.573 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:17:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:17:04.573 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:17:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:17:04.574 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:17:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:04 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:04 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:05 np0005540825 nova_compute[256151]: 2025-12-01 10:17:05.400 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:17:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:05.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 75 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Dec  1 05:17:05 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:05 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:05.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:06 np0005540825 podman[265025]: 2025-12-01 10:17:06.291956427 +0000 UTC m=+0.152120543 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  1 05:17:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:06 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:06 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:06 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:07.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
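Each handle_command/audit pair above is a monitor command arriving over librados as a JSON blob; the earlier {"prefix": "df"} dispatches, for example, are what nova's `ceph df` CLI call becomes. A sketch issuing the same command directly, assuming the client.openstack keyring is readable from this host:

    # Sketch: send the same {"prefix": "df"} mon command the audit log
    # records, via librados. Conf path and client name mirror the CLI call.
    import json, rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    ret, out, _err = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    print(ret, json.loads(out)["stats"]["total_avail_bytes"])
    cluster.shutdown()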
Dec  1 05:17:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:07.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
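# Annotation: the recurring anonymous "HEAD / HTTP/1.0" requests from
# 192.168.122.100 and 192.168.122.102, arriving roughly every two seconds,
# have the shape of load-balancer health checks against the RGW beast
# frontend. One such probe can be reproduced manually (the host and port are
# assumptions; they do not appear in these lines):
#   curl -s -I --http1.0 http://<rgw-host>:<rgw-port>/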
Dec  1 05:17:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v795: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:17:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:07 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:07 np0005540825 podman[265227]: 2025-12-01 10:17:07.957415266 +0000 UTC m=+0.061284027 container create d5d84c2e09eacbae8171d6bb834e323f5c054176d037ad7b66214eefda173e7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mclean, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:17:07 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:17:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:07.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:08 np0005540825 systemd[1]: Started libpod-conmon-d5d84c2e09eacbae8171d6bb834e323f5c054176d037ad7b66214eefda173e7c.scope.
Dec  1 05:17:08 np0005540825 podman[265227]: 2025-12-01 10:17:07.936447599 +0000 UTC m=+0.040316320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:17:08 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:17:08 np0005540825 podman[265227]: 2025-12-01 10:17:08.069347302 +0000 UTC m=+0.173216063 container init d5d84c2e09eacbae8171d6bb834e323f5c054176d037ad7b66214eefda173e7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mclean, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  1 05:17:08 np0005540825 podman[265227]: 2025-12-01 10:17:08.081895031 +0000 UTC m=+0.185763782 container start d5d84c2e09eacbae8171d6bb834e323f5c054176d037ad7b66214eefda173e7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mclean, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 05:17:08 np0005540825 podman[265227]: 2025-12-01 10:17:08.086102405 +0000 UTC m=+0.189971226 container attach d5d84c2e09eacbae8171d6bb834e323f5c054176d037ad7b66214eefda173e7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mclean, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  1 05:17:08 np0005540825 inspiring_mclean[265243]: 167 167
Dec  1 05:17:08 np0005540825 systemd[1]: libpod-d5d84c2e09eacbae8171d6bb834e323f5c054176d037ad7b66214eefda173e7c.scope: Deactivated successfully.
Dec  1 05:17:08 np0005540825 podman[265227]: 2025-12-01 10:17:08.092246031 +0000 UTC m=+0.196114812 container died d5d84c2e09eacbae8171d6bb834e323f5c054176d037ad7b66214eefda173e7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mclean, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:17:08 np0005540825 systemd[1]: var-lib-containers-storage-overlay-402ed4514e822c7af311bee62cea61527a105df1f26dc0ccac9a6601bf8ee692-merged.mount: Deactivated successfully.
Dec  1 05:17:08 np0005540825 podman[265227]: 2025-12-01 10:17:08.138600824 +0000 UTC m=+0.242469585 container remove d5d84c2e09eacbae8171d6bb834e323f5c054176d037ad7b66214eefda173e7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mclean, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:17:08 np0005540825 systemd[1]: libpod-conmon-d5d84c2e09eacbae8171d6bb834e323f5c054176d037ad7b66214eefda173e7c.scope: Deactivated successfully.
Dec  1 05:17:08 np0005540825 podman[265266]: 2025-12-01 10:17:08.380063581 +0000 UTC m=+0.051640447 container create 16af3d4b6a90f2f09c73b7315f1d7b47d7d8426a7ee6cc0bd9aa530037f2a385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_chebyshev, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:17:08 np0005540825 systemd[1]: Started libpod-conmon-16af3d4b6a90f2f09c73b7315f1d7b47d7d8426a7ee6cc0bd9aa530037f2a385.scope.
Dec  1 05:17:08 np0005540825 podman[265266]: 2025-12-01 10:17:08.359765272 +0000 UTC m=+0.031342118 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:17:08 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:17:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395c5bcdaaa487afd385d308ac18bceb2e688b60ba538d8e72d43d7fc9304a87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395c5bcdaaa487afd385d308ac18bceb2e688b60ba538d8e72d43d7fc9304a87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395c5bcdaaa487afd385d308ac18bceb2e688b60ba538d8e72d43d7fc9304a87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395c5bcdaaa487afd385d308ac18bceb2e688b60ba538d8e72d43d7fc9304a87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:08 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395c5bcdaaa487afd385d308ac18bceb2e688b60ba538d8e72d43d7fc9304a87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:08 np0005540825 podman[265266]: 2025-12-01 10:17:08.503653112 +0000 UTC m=+0.175230018 container init 16af3d4b6a90f2f09c73b7315f1d7b47d7d8426a7ee6cc0bd9aa530037f2a385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_chebyshev, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  1 05:17:08 np0005540825 podman[265266]: 2025-12-01 10:17:08.514064293 +0000 UTC m=+0.185641119 container start 16af3d4b6a90f2f09c73b7315f1d7b47d7d8426a7ee6cc0bd9aa530037f2a385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_chebyshev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  1 05:17:08 np0005540825 podman[265266]: 2025-12-01 10:17:08.517589549 +0000 UTC m=+0.189166415 container attach 16af3d4b6a90f2f09c73b7315f1d7b47d7d8426a7ee6cc0bd9aa530037f2a385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:17:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101708 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
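# Annotation: haproxy marks backend nfs.cephfs.1 DOWN on a Layer4 "Connection
# refused", which matches a ganesha daemon being stopped or redeployed; two
# active backends remain. The NFS daemons cephadm currently runs can be listed
# with a standard orchestrator query (not from this log):
#   ceph orch ps --daemon-type nfs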
Dec  1 05:17:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:08 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:08 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:08 np0005540825 hungry_chebyshev[265283]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:17:08 np0005540825 hungry_chebyshev[265283]: --> All data devices are unavailable
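# Annotation: the two "-->" lines are ceph-volume output captured from the
# short-lived helper container hungry_chebyshev: the one candidate device is
# already an LVM member, so no new OSDs can be created from it. The wording
# matches a ceph-volume batch dry run of roughly this form (the exact flags
# cephadm passes are an assumption):
#   ceph-volume lvm batch --report --format json /dev/<device>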
Dec  1 05:17:08 np0005540825 systemd[1]: libpod-16af3d4b6a90f2f09c73b7315f1d7b47d7d8426a7ee6cc0bd9aa530037f2a385.scope: Deactivated successfully.
Dec  1 05:17:08 np0005540825 podman[265266]: 2025-12-01 10:17:08.92346525 +0000 UTC m=+0.595042116 container died 16af3d4b6a90f2f09c73b7315f1d7b47d7d8426a7ee6cc0bd9aa530037f2a385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  1 05:17:08 np0005540825 nova_compute[256151]: 2025-12-01 10:17:08.979 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:09 np0005540825 systemd[1]: var-lib-containers-storage-overlay-395c5bcdaaa487afd385d308ac18bceb2e688b60ba538d8e72d43d7fc9304a87-merged.mount: Deactivated successfully.
Dec  1 05:17:09 np0005540825 podman[265266]: 2025-12-01 10:17:09.254963601 +0000 UTC m=+0.926540467 container remove 16af3d4b6a90f2f09c73b7315f1d7b47d7d8426a7ee6cc0bd9aa530037f2a385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_chebyshev, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  1 05:17:09 np0005540825 systemd[1]: libpod-conmon-16af3d4b6a90f2f09c73b7315f1d7b47d7d8426a7ee6cc0bd9aa530037f2a385.scope: Deactivated successfully.
Dec  1 05:17:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:17:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:17:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:17:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:17:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:17:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:17:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:17:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:17:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:09.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v796: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:17:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:09 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:09 np0005540825 podman[265404]: 2025-12-01 10:17:09.936539486 +0000 UTC m=+0.078325429 container create e8b39e520b2e41bffba0c00c917e98f3cdac7741a66b5c13b71ea98abdf14640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_chaum, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 05:17:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:09.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:09 np0005540825 podman[265404]: 2025-12-01 10:17:09.877081368 +0000 UTC m=+0.018867291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:17:10 np0005540825 systemd[1]: Started libpod-conmon-e8b39e520b2e41bffba0c00c917e98f3cdac7741a66b5c13b71ea98abdf14640.scope.
Dec  1 05:17:10 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:17:10 np0005540825 podman[265404]: 2025-12-01 10:17:10.058900152 +0000 UTC m=+0.200686155 container init e8b39e520b2e41bffba0c00c917e98f3cdac7741a66b5c13b71ea98abdf14640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 05:17:10 np0005540825 podman[265404]: 2025-12-01 10:17:10.072816698 +0000 UTC m=+0.214602641 container start e8b39e520b2e41bffba0c00c917e98f3cdac7741a66b5c13b71ea98abdf14640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  1 05:17:10 np0005540825 wizardly_chaum[265420]: 167 167
Dec  1 05:17:10 np0005540825 systemd[1]: libpod-e8b39e520b2e41bffba0c00c917e98f3cdac7741a66b5c13b71ea98abdf14640.scope: Deactivated successfully.
Dec  1 05:17:10 np0005540825 podman[265404]: 2025-12-01 10:17:10.162281467 +0000 UTC m=+0.304067400 container attach e8b39e520b2e41bffba0c00c917e98f3cdac7741a66b5c13b71ea98abdf14640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_chaum, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 05:17:10 np0005540825 podman[265404]: 2025-12-01 10:17:10.163260743 +0000 UTC m=+0.305046686 container died e8b39e520b2e41bffba0c00c917e98f3cdac7741a66b5c13b71ea98abdf14640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_chaum, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:17:10 np0005540825 systemd[1]: var-lib-containers-storage-overlay-595b9264bcd4741c6a3b1f0b6e2ba3959ffaeba5497ec2a529b22f2ce92e905d-merged.mount: Deactivated successfully.
Dec  1 05:17:10 np0005540825 podman[265404]: 2025-12-01 10:17:10.276973637 +0000 UTC m=+0.418759580 container remove e8b39e520b2e41bffba0c00c917e98f3cdac7741a66b5c13b71ea98abdf14640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  1 05:17:10 np0005540825 systemd[1]: libpod-conmon-e8b39e520b2e41bffba0c00c917e98f3cdac7741a66b5c13b71ea98abdf14640.scope: Deactivated successfully.
Dec  1 05:17:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:10 np0005540825 nova_compute[256151]: 2025-12-01 10:17:10.402 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:10 np0005540825 podman[265446]: 2025-12-01 10:17:10.500442478 +0000 UTC m=+0.056379875 container create 09dda1bcf9e1b18a43091f4e8d1348addd200c56c96349a9a8515b6a673e07f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_thompson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 05:17:10 np0005540825 systemd[1]: Started libpod-conmon-09dda1bcf9e1b18a43091f4e8d1348addd200c56c96349a9a8515b6a673e07f7.scope.
Dec  1 05:17:10 np0005540825 podman[265446]: 2025-12-01 10:17:10.470976431 +0000 UTC m=+0.026913898 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:17:10 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:17:10 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676bece5eb741bed541c3e976d9c6a31455997594dfe3b2c2dfc6a253d97ee0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:10 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676bece5eb741bed541c3e976d9c6a31455997594dfe3b2c2dfc6a253d97ee0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:10 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676bece5eb741bed541c3e976d9c6a31455997594dfe3b2c2dfc6a253d97ee0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:10 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676bece5eb741bed541c3e976d9c6a31455997594dfe3b2c2dfc6a253d97ee0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:10 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca8001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:10 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:10 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc98002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:10 np0005540825 podman[265446]: 2025-12-01 10:17:10.61778619 +0000 UTC m=+0.173723567 container init 09dda1bcf9e1b18a43091f4e8d1348addd200c56c96349a9a8515b6a673e07f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:17:10 np0005540825 podman[265446]: 2025-12-01 10:17:10.630064482 +0000 UTC m=+0.186001839 container start 09dda1bcf9e1b18a43091f4e8d1348addd200c56c96349a9a8515b6a673e07f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_thompson, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 05:17:10 np0005540825 podman[265446]: 2025-12-01 10:17:10.637250006 +0000 UTC m=+0.193187363 container attach 09dda1bcf9e1b18a43091f4e8d1348addd200c56c96349a9a8515b6a673e07f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_thompson, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 05:17:10 np0005540825 loving_thompson[265462]: {
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:    "1": [
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:        {
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            "devices": [
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "/dev/loop3"
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            ],
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            "lv_name": "ceph_lv0",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            "lv_size": "21470642176",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            "name": "ceph_lv0",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            "tags": {
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.cluster_name": "ceph",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.crush_device_class": "",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.encrypted": "0",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.osd_id": "1",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.type": "block",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.vdo": "0",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:                "ceph.with_tpm": "0"
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            },
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            "type": "block",
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:            "vg_name": "ceph_vg0"
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:        }
Dec  1 05:17:10 np0005540825 loving_thompson[265462]:    ]
Dec  1 05:17:10 np0005540825 loving_thompson[265462]: }
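# Annotation: the JSON above maps OSD id "1" to its backing logical volume
# (ceph_vg0/ceph_lv0 on /dev/loop3) together with the ceph.* LV tags cephadm
# uses to reassemble OSD metadata. It has the shape of ceph-volume's LVM
# listing, which can be regenerated on the host (running it through cephadm
# shell is an assumption):
#   cephadm shell -- ceph-volume lvm list --format json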
Dec  1 05:17:10 np0005540825 systemd[1]: libpod-09dda1bcf9e1b18a43091f4e8d1348addd200c56c96349a9a8515b6a673e07f7.scope: Deactivated successfully.
Dec  1 05:17:10 np0005540825 podman[265446]: 2025-12-01 10:17:10.972514919 +0000 UTC m=+0.528452276 container died 09dda1bcf9e1b18a43091f4e8d1348addd200c56c96349a9a8515b6a673e07f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:17:11 np0005540825 systemd[1]: var-lib-containers-storage-overlay-676bece5eb741bed541c3e976d9c6a31455997594dfe3b2c2dfc6a253d97ee0e-merged.mount: Deactivated successfully.
Dec  1 05:17:11 np0005540825 podman[265446]: 2025-12-01 10:17:11.133528832 +0000 UTC m=+0.689466189 container remove 09dda1bcf9e1b18a43091f4e8d1348addd200c56c96349a9a8515b6a673e07f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_thompson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 05:17:11 np0005540825 systemd[1]: libpod-conmon-09dda1bcf9e1b18a43091f4e8d1348addd200c56c96349a9a8515b6a673e07f7.scope: Deactivated successfully.
Dec  1 05:17:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:11] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Dec  1 05:17:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:11] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Dec  1 05:17:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec  1 05:17:11 np0005540825 podman[265576]: 2025-12-01 10:17:11.883497264 +0000 UTC m=+0.066156479 container create a26cc172bb7a131565b5fabe3c032eeb9b36a67f8ecd5f5800771205f32c41a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_villani, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:17:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:11 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:11 np0005540825 systemd[1]: Started libpod-conmon-a26cc172bb7a131565b5fabe3c032eeb9b36a67f8ecd5f5800771205f32c41a1.scope.
Dec  1 05:17:11 np0005540825 podman[265576]: 2025-12-01 10:17:11.846008991 +0000 UTC m=+0.028668246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:17:11 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:17:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:11.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:12 np0005540825 podman[265576]: 2025-12-01 10:17:12.009832639 +0000 UTC m=+0.192491934 container init a26cc172bb7a131565b5fabe3c032eeb9b36a67f8ecd5f5800771205f32c41a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  1 05:17:12 np0005540825 podman[265576]: 2025-12-01 10:17:12.019697136 +0000 UTC m=+0.202356391 container start a26cc172bb7a131565b5fabe3c032eeb9b36a67f8ecd5f5800771205f32c41a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_villani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 05:17:12 np0005540825 xenodochial_villani[265593]: 167 167
Dec  1 05:17:12 np0005540825 podman[265576]: 2025-12-01 10:17:12.02725378 +0000 UTC m=+0.209913025 container attach a26cc172bb7a131565b5fabe3c032eeb9b36a67f8ecd5f5800771205f32c41a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  1 05:17:12 np0005540825 systemd[1]: libpod-a26cc172bb7a131565b5fabe3c032eeb9b36a67f8ecd5f5800771205f32c41a1.scope: Deactivated successfully.
Dec  1 05:17:12 np0005540825 podman[265576]: 2025-12-01 10:17:12.030708824 +0000 UTC m=+0.213368079 container died a26cc172bb7a131565b5fabe3c032eeb9b36a67f8ecd5f5800771205f32c41a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 05:17:12 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7e91365a0abda4c71eb2ab5bf3f38ab02fbdc6918a3700a93d93fbe1a3dda96a-merged.mount: Deactivated successfully.
Dec  1 05:17:12 np0005540825 podman[265576]: 2025-12-01 10:17:12.080747426 +0000 UTC m=+0.263406641 container remove a26cc172bb7a131565b5fabe3c032eeb9b36a67f8ecd5f5800771205f32c41a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_villani, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:17:12 np0005540825 systemd[1]: libpod-conmon-a26cc172bb7a131565b5fabe3c032eeb9b36a67f8ecd5f5800771205f32c41a1.scope: Deactivated successfully.
Dec  1 05:17:12 np0005540825 podman[265617]: 2025-12-01 10:17:12.335730269 +0000 UTC m=+0.068076251 container create bb329feb1c44a067e11c4ee53e73f484816e8974457a2aaf6a95a1e79561984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:17:12 np0005540825 systemd[1]: Started libpod-conmon-bb329feb1c44a067e11c4ee53e73f484816e8974457a2aaf6a95a1e79561984d.scope.
Dec  1 05:17:12 np0005540825 podman[265617]: 2025-12-01 10:17:12.30951273 +0000 UTC m=+0.041858812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:17:12 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:17:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd705db960fd0c400bf2f95cab4f23d0a6c109f3aa0f86d0cb2aa0d6df2a91f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd705db960fd0c400bf2f95cab4f23d0a6c109f3aa0f86d0cb2aa0d6df2a91f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd705db960fd0c400bf2f95cab4f23d0a6c109f3aa0f86d0cb2aa0d6df2a91f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd705db960fd0c400bf2f95cab4f23d0a6c109f3aa0f86d0cb2aa0d6df2a91f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:12 np0005540825 podman[265617]: 2025-12-01 10:17:12.450378238 +0000 UTC m=+0.182724300 container init bb329feb1c44a067e11c4ee53e73f484816e8974457a2aaf6a95a1e79561984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 05:17:12 np0005540825 podman[265617]: 2025-12-01 10:17:12.469069483 +0000 UTC m=+0.201415485 container start bb329feb1c44a067e11c4ee53e73f484816e8974457a2aaf6a95a1e79561984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  1 05:17:12 np0005540825 podman[265617]: 2025-12-01 10:17:12.47302082 +0000 UTC m=+0.205366882 container attach bb329feb1c44a067e11c4ee53e73f484816e8974457a2aaf6a95a1e79561984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:17:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:12 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:12 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:12 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca80013d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:13 np0005540825 lvm[265709]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:17:13 np0005540825 lvm[265709]: VG ceph_vg0 finished
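# Annotation: LVM event-based activation reports VG ceph_vg0 complete once its
# only PV (/dev/loop3) is online. The current VG/LV state can be confirmed
# with the standard LVM tools:
#   vgs ceph_vg0 && lvs ceph_vg0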
Dec  1 05:17:13 np0005540825 suspicious_cray[265634]: {}
Dec  1 05:17:13 np0005540825 systemd[1]: libpod-bb329feb1c44a067e11c4ee53e73f484816e8974457a2aaf6a95a1e79561984d.scope: Deactivated successfully.
Dec  1 05:17:13 np0005540825 systemd[1]: libpod-bb329feb1c44a067e11c4ee53e73f484816e8974457a2aaf6a95a1e79561984d.scope: Consumed 1.558s CPU time.
Dec  1 05:17:13 np0005540825 podman[265617]: 2025-12-01 10:17:13.300053966 +0000 UTC m=+1.032399998 container died bb329feb1c44a067e11c4ee53e73f484816e8974457a2aaf6a95a1e79561984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cray, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  1 05:17:13 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5dd705db960fd0c400bf2f95cab4f23d0a6c109f3aa0f86d0cb2aa0d6df2a91f-merged.mount: Deactivated successfully.
Dec  1 05:17:13 np0005540825 podman[265617]: 2025-12-01 10:17:13.35087893 +0000 UTC m=+1.083224932 container remove bb329feb1c44a067e11c4ee53e73f484816e8974457a2aaf6a95a1e79561984d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:17:13 np0005540825 systemd[1]: libpod-conmon-bb329feb1c44a067e11c4ee53e73f484816e8974457a2aaf6a95a1e79561984d.scope: Deactivated successfully.
Dec  1 05:17:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:17:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:17:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:17:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:17:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:13.645Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:17:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:13.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v798: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec  1 05:17:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:13 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc980036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:13.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:13 np0005540825 nova_compute[256151]: 2025-12-01 10:17:13.982 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:14 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:17:14 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:17:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:14 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:14 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:14 np0005540825 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  1 05:17:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:15 np0005540825 nova_compute[256151]: 2025-12-01 10:17:15.435 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:15.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Dec  1 05:17:15 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:15 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca80044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:15.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:16 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc980036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:16 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:16 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:17.203Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:17:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:17.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 588 KiB/s wr, 85 op/s
Dec  1 05:17:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:17 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:17.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:18 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efca80044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:18 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efc980036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:19 np0005540825 nova_compute[256151]: 2025-12-01 10:17:19.015 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:19.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  1 05:17:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:19 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc00037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  1 05:17:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:19.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:20 np0005540825 nova_compute[256151]: 2025-12-01 10:17:20.437 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:20 np0005540825 kernel: ganesha.nfsd[262977]: segfault at 50 ip 00007efd7b4bd32e sp 00007efd3d7f9210 error 4 in libntirpc.so.5.8[7efd7b4a2000+2c000] likely on CPU 3 (core 0, socket 3)
Dec  1 05:17:20 np0005540825 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  1 05:17:20 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[262160]: 01/12/2025 10:17:20 : epoch 692d6aa4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efcc8004e70 fd 39 proxy ignored for local
Dec  1 05:17:20 np0005540825 systemd[1]: Started Process Core Dump (PID 265759/UID 0).
Dec  1 05:17:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:21] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Dec  1 05:17:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:21] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Dec  1 05:17:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:21.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  1 05:17:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:21.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:22 np0005540825 podman[265763]: 2025-12-01 10:17:22.222856602 +0000 UTC m=+0.078185894 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 05:17:22 np0005540825 systemd-coredump[265760]: Process 262164 (ganesha.nfsd) of user 0 dumped core.
Dec  1 05:17:22 np0005540825 systemd-coredump[265760]: Stack trace of thread 56:
Dec  1 05:17:22 np0005540825 systemd-coredump[265760]: #0  0x00007efd7b4bd32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Dec  1 05:17:22 np0005540825 systemd-coredump[265760]: ELF object binary architecture: AMD x86-64
Dec  1 05:17:22 np0005540825 systemd[1]: systemd-coredump@10-265759-0.service: Deactivated successfully.
Dec  1 05:17:22 np0005540825 systemd[1]: systemd-coredump@10-265759-0.service: Consumed 1.238s CPU time.
Dec  1 05:17:22 np0005540825 podman[265787]: 2025-12-01 10:17:22.515193255 +0000 UTC m=+0.034504274 container died 175072eb9ad8288754525f1835b155d486baa9b9919fdcbe6ed4f80c20993ee5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:17:22 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1a9f823e5f78d38f77547afe72b9f24b6a0fcaa37b3b2d117d60ff427085d7cd-merged.mount: Deactivated successfully.
Dec  1 05:17:22 np0005540825 podman[265787]: 2025-12-01 10:17:22.562053642 +0000 UTC m=+0.081364701 container remove 175072eb9ad8288754525f1835b155d486baa9b9919fdcbe6ed4f80c20993ee5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:17:22 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Main process exited, code=exited, status=139/n/a
Dec  1 05:17:23 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Failed with result 'exit-code'.
Dec  1 05:17:23 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 2.094s CPU time.
Dec  1 05:17:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:23.648Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:17:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:23.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 69 op/s
Dec  1 05:17:23 np0005540825 ovn_controller[153404]: 2025-12-01T10:17:23Z|00038|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec  1 05:17:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:23.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:24 np0005540825 nova_compute[256151]: 2025-12-01 10:17:24.017 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:17:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:17:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:25 np0005540825 nova_compute[256151]: 2025-12-01 10:17:25.490 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:25.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 114 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 111 op/s
Dec  1 05:17:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:25.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:26 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101726 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:17:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:27.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:17:27 np0005540825 podman[265861]: 2025-12-01 10:17:27.211858744 +0000 UTC m=+0.072819009 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:17:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:27.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 121 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1009 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Dec  1 05:17:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [WARNING] 334/101727 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  1 05:17:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [NOTICE] 334/101727 (4) : haproxy version is 2.3.17-d1c9119
Dec  1 05:17:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [NOTICE] 334/101727 (4) : path to executable is /usr/local/sbin/haproxy
Dec  1 05:17:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd[96518]: [ALERT] 334/101727 (4) : backend 'backend' has no server available!
Dec  1 05:17:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:27.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:29 np0005540825 nova_compute[256151]: 2025-12-01 10:17:29.033 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:29.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 121 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec  1 05:17:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:29.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:30 np0005540825 nova_compute[256151]: 2025-12-01 10:17:30.528 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:31] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec  1 05:17:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:31] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec  1 05:17:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:31.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  1 05:17:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:31.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:33 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Scheduled restart job, restart counter is at 11.
Dec  1 05:17:33 np0005540825 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:17:33 np0005540825 systemd[1]: ceph-365f19c2-81e5-5edd-b6b4-280555214d3a@nfs.cephfs.2.0.compute-0.pytvsu.service: Consumed 2.094s CPU time.
Dec  1 05:17:33 np0005540825 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a...
Dec  1 05:17:33 np0005540825 podman[265934]: 2025-12-01 10:17:33.490268168 +0000 UTC m=+0.074421603 container create 7a97e5c792e90c0e9beef244d64f90b782f45501ef79e0290396630e04fbacec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:17:33 np0005540825 podman[265934]: 2025-12-01 10:17:33.461583293 +0000 UTC m=+0.045736788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:17:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618e30818dd79cc5abcbe827990212869bced81f23640480cf1350ba35b137bc/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618e30818dd79cc5abcbe827990212869bced81f23640480cf1350ba35b137bc/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618e30818dd79cc5abcbe827990212869bced81f23640480cf1350ba35b137bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618e30818dd79cc5abcbe827990212869bced81f23640480cf1350ba35b137bc/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.pytvsu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:17:33 np0005540825 podman[265934]: 2025-12-01 10:17:33.57764228 +0000 UTC m=+0.161795795 container init 7a97e5c792e90c0e9beef244d64f90b782f45501ef79e0290396630e04fbacec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 05:17:33 np0005540825 podman[265934]: 2025-12-01 10:17:33.590564249 +0000 UTC m=+0.174717684 container start 7a97e5c792e90c0e9beef244d64f90b782f45501ef79e0290396630e04fbacec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:17:33 np0005540825 bash[265934]: 7a97e5c792e90c0e9beef244d64f90b782f45501ef79e0290396630e04fbacec
Dec  1 05:17:33 np0005540825 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.pytvsu for 365f19c2-81e5-5edd-b6b4-280555214d3a.
Dec  1 05:17:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  1 05:17:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  1 05:17:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:33.650Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:17:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:33.650Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:17:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  1 05:17:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  1 05:17:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  1 05:17:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  1 05:17:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  1 05:17:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:17:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:33.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  1 05:17:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:33.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:34 np0005540825 nova_compute[256151]: 2025-12-01 10:17:34.071 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec  1 05:17:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:35 np0005540825 nova_compute[256151]: 2025-12-01 10:17:35.531 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:35.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:17:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:35.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:37.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:17:37 np0005540825 podman[265995]: 2025-12-01 10:17:37.239452974 +0000 UTC m=+0.106994623 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 05:17:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:37.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 444 KiB/s wr, 21 op/s
Dec  1 05:17:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:38.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:39 np0005540825 nova_compute[256151]: 2025-12-01 10:17:39.077 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:17:39
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.control', '.nfs', '.mgr', 'default.rgw.log', 'backups', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:17:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:17:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:17:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:17:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:17:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:17:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:39.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 12 KiB/s wr, 2 op/s
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:17:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:17:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:40.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:40 np0005540825 nova_compute[256151]: 2025-12-01 10:17:40.581 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:41] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:17:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:41] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:17:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:41.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 15 KiB/s wr, 3 op/s
Dec  1 05:17:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:42.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:43.652Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:17:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:43.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 3.2 KiB/s wr, 1 op/s
Dec  1 05:17:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:17:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:17:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:17:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:17:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:44.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:44 np0005540825 nova_compute[256151]: 2025-12-01 10:17:44.124 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:45 np0005540825 nova_compute[256151]: 2025-12-01 10:17:45.583 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:45.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 7.2 KiB/s wr, 2 op/s
Dec  1 05:17:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:46.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:47.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:17:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:17:47.683 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 05:17:47 np0005540825 nova_compute[256151]: 2025-12-01 10:17:47.683 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:17:47.685 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 05:17:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:47.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 6.2 KiB/s wr, 2 op/s
Dec  1 05:17:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:48.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:48 np0005540825 nova_compute[256151]: 2025-12-01 10:17:48.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:17:48 np0005540825 nova_compute[256151]: 2025-12-01 10:17:48.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:17:48 np0005540825 nova_compute[256151]: 2025-12-01 10:17:48.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:17:48 np0005540825 nova_compute[256151]: 2025-12-01 10:17:48.071 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
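The four DEBUG lines above (and the _sync_scheduler_instance_info, _poll_*, and _instance_usage_audit lines that follow) all come from oslo.service firing ComputeManager's periodic tasks on their timers. A library-free sketch of that dispatch pattern, illustrative only and not oslo's actual implementation:

    import time

    class PeriodicRunner:
        """Toy analogue of the periodic-task dispatch traced above (not oslo's API)."""

        def __init__(self):
            self._tasks = []  # each entry: [name, fn, spacing_s, last_run]

        def register(self, name, fn, spacing):
            self._tasks.append([name, fn, spacing, 0.0])

        def run_once(self):
            now = time.monotonic()
            for task in self._tasks:
                name, fn, spacing, last = task
                if now - last >= spacing:
                    print(f"Running periodic task {name}")  # cf. the DEBUG lines
                    fn()
                    task[3] = now

    runner = PeriodicRunner()
    runner.register("ComputeManager._heal_instance_info_cache", lambda: None, 60)
    runner.run_once()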
Dec  1 05:17:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:17:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:17:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:17:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:17:49 np0005540825 nova_compute[256151]: 2025-12-01 10:17:49.065 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:17:49 np0005540825 nova_compute[256151]: 2025-12-01 10:17:49.159 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:49.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 6.2 KiB/s wr, 2 op/s
Dec  1 05:17:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:50.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:50 np0005540825 nova_compute[256151]: 2025-12-01 10:17:50.054 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:17:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:50 np0005540825 nova_compute[256151]: 2025-12-01 10:17:50.584 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:51 np0005540825 nova_compute[256151]: 2025-12-01 10:17:51.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:17:51 np0005540825 nova_compute[256151]: 2025-12-01 10:17:51.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:17:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:51] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:17:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:17:51] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:17:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 7.3 KiB/s wr, 30 op/s
Dec  1 05:17:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:52.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:52 np0005540825 nova_compute[256151]: 2025-12-01 10:17:52.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:17:53 np0005540825 nova_compute[256151]: 2025-12-01 10:17:53.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:17:53 np0005540825 nova_compute[256151]: 2025-12-01 10:17:53.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:17:53 np0005540825 nova_compute[256151]: 2025-12-01 10:17:53.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:17:53 np0005540825 podman[266063]: 2025-12-01 10:17:53.227893994 +0000 UTC m=+0.086195601 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:17:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:53.652Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:17:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:53.653Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:17:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:53.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec  1 05:17:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:17:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:17:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:17:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:17:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:54.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:54 np0005540825 nova_compute[256151]: 2025-12-01 10:17:54.214 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:17:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:17:54 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:17:54.687 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.050 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.050 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.051 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
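The Acquiring/acquired/released triplet above is oslo.concurrency's instrumentation around the "compute_resources" lock, reporting how long the caller waited for the lock and how long it held it. A plain-threading stand-in for the same instrumentation (a hypothetical helper, not the oslo_concurrency API):

    import threading
    import time
    from contextlib import contextmanager

    _lock = threading.Lock()

    @contextmanager
    def timed_lock(name):
        t0 = time.monotonic()
        _lock.acquire()
        print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            _lock.release()
            print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

    with timed_lock("compute_resources"):
        pass  # critical section, e.g. the node-cache cleanup traced above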
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.051 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.052 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:17:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:17:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:17:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1773807356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.519 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
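As the processutils lines above show, the resource audit literally shells out to the ceph CLI and times the call (0.467 s here). A hedged reconstruction of that call with the standard library; the field access in the comment assumes the usual `ceph df --format=json` output layout:

    import json
    import subprocess

    def ceph_df(client="openstack", conf="/etc/ceph/ceph.conf"):
        """Run the same command the resource tracker logs, and parse its JSON."""
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
            check=True, capture_output=True, text=True,
        )
        return json.loads(out.stdout)

    # stats = ceph_df()
    # print(stats["stats"]["total_avail_bytes"])  # cluster-wide free bytes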
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.589 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.774 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.776 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4634MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.776 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:17:55 np0005540825 nova_compute[256151]: 2025-12-01 10:17:55.777 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:17:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:55.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec  1 05:17:56 np0005540825 nova_compute[256151]: 2025-12-01 10:17:56.005 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:17:56 np0005540825 nova_compute[256151]: 2025-12-01 10:17:56.005 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:17:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:56.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:56 np0005540825 nova_compute[256151]: 2025-12-01 10:17:56.030 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:17:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:17:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2141696538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:17:56 np0005540825 nova_compute[256151]: 2025-12-01 10:17:56.519 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:17:56 np0005540825 nova_compute[256151]: 2025-12-01 10:17:56.526 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:17:56 np0005540825 nova_compute[256151]: 2025-12-01 10:17:56.543 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
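The inventory dict above is what the resource tracker reports to placement. Under placement's usual capacity rule, capacity = (total - reserved) * allocation_ratio (stated here as an assumption about placement's accounting, not something these log lines show), the logged numbers work out as follows:

    inventory = {  # copied from the report.py:940 line above
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, capacity)  # MEMORY_MB 7168, VCPU 32, DISK_GB 52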
Dec  1 05:17:56 np0005540825 nova_compute[256151]: 2025-12-01 10:17:56.547 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:17:56 np0005540825 nova_compute[256151]: 2025-12-01 10:17:56.547 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:17:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:17:57.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:17:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:17:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:17:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:57.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:17:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:17:58.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:17:58 np0005540825 podman[266133]: 2025-12-01 10:17:58.236614628 +0000 UTC m=+0.096123199 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 05:17:58 np0005540825 nova_compute[256151]: 2025-12-01 10:17:58.548 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:17:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:17:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:17:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:17:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:17:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:17:59 np0005540825 nova_compute[256151]: 2025-12-01 10:17:59.217 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:17:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:17:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:17:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:17:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:17:59.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:18:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:18:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:00.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:18:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:00 np0005540825 nova_compute[256151]: 2025-12-01 10:18:00.591 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:01] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:18:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:01] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:18:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:18:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:01.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:02.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:03.654Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:18:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:18:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:03.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:04.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:04 np0005540825 nova_compute[256151]: 2025-12-01 10:18:04.268 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:18:04.574 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:18:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:18:04.575 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:18:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:18:04.575 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:18:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:05 np0005540825 nova_compute[256151]: 2025-12-01 10:18:05.594 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:18:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:05.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:06.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  1 05:18:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2069587201' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  1 05:18:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  1 05:18:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2069587201' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
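The handle_command/audit pairs above show client.openstack (from 192.168.122.10) driving the monitor with df and osd pool get-quota. The same commands can be issued from Python through the rados binding's mon_command(), assuming python3-rados and a client.openstack keyring are available on the host:

    import json
    import rados  # python3-rados binding (assumed installed)

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        if ret == 0:
            print(json.loads(out)["stats"]["total_avail_bytes"])
    finally:
        cluster.shutdown()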
Dec  1 05:18:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:07.208Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:18:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:07.208Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:18:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:07.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
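Both dashboard webhook targets now fail at the TCP dial step ("dial tcp ...:8443: i/o timeout"): the connection to port 8443 never completes, which is consistent with the "context deadline exceeded" retries repeated throughout this window. A quick stdlib check that mirrors Alertmanager's dial:

    import socket

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        try:
            socket.create_connection((host, 8443), timeout=5).close()
            print(host, "reachable")
        except OSError as exc:  # timeout or refusal, cf. the i/o timeout above
            print(host, "unreachable:", exc)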
Dec  1 05:18:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:18:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:07.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:08.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:08 np0005540825 podman[266189]: 2025-12-01 10:18:08.287505777 +0000 UTC m=+0.143484020 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
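This health_status=healthy record, like the ovn_metadata_agent and multipathd ones at 10:17:53 and 10:17:58, comes from podman running the container's configured healthcheck ('test': '/openstack/healthcheck'). The same check can be triggered on demand; a small wrapper, assuming `podman healthcheck run` exits 0 on a passing check as documented:

    import subprocess

    def container_healthy(name):
        # Exit status 0 from `podman healthcheck run` means the check passed.
        return subprocess.run(
            ["podman", "healthcheck", "run", name],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ).returncode == 0

    print(container_healthy("ovn_controller"))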
Dec  1 05:18:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:09 np0005540825 nova_compute[256151]: 2025-12-01 10:18:09.271 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:18:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:18:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:18:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:18:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:18:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:18:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:18:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:18:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:18:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:09.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:10.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:10 np0005540825 nova_compute[256151]: 2025-12-01 10:18:10.596 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:11] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:18:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:11] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:18:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:18:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:11.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  1 05:18:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  1 05:18:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:13.655Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:18:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 41 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:18:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:18:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:13.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:18:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:14.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:14 np0005540825 nova_compute[256151]: 2025-12-01 10:18:14.308 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 05:18:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 05:18:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:18:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:18:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:18:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:18:15 np0005540825 nova_compute[256151]: 2025-12-01 10:18:15.598 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 68 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Dec  1 05:18:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:15.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:15 np0005540825 podman[266468]: 2025-12-01 10:18:15.980062408 +0000 UTC m=+0.058886433 container create fbd79836bfed74d2dc22751f71d673eee3edafbdb215cbdc979ee529759ad1b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:18:16 np0005540825 systemd[1]: Started libpod-conmon-fbd79836bfed74d2dc22751f71d673eee3edafbdb215cbdc979ee529759ad1b5.scope.
Dec  1 05:18:16 np0005540825 podman[266468]: 2025-12-01 10:18:15.953581972 +0000 UTC m=+0.032406057 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:18:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:16.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:16 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:18:16 np0005540825 podman[266468]: 2025-12-01 10:18:16.084737608 +0000 UTC m=+0.163561653 container init fbd79836bfed74d2dc22751f71d673eee3edafbdb215cbdc979ee529759ad1b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  1 05:18:16 np0005540825 podman[266468]: 2025-12-01 10:18:16.095601201 +0000 UTC m=+0.174425236 container start fbd79836bfed74d2dc22751f71d673eee3edafbdb215cbdc979ee529759ad1b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:18:16 np0005540825 podman[266468]: 2025-12-01 10:18:16.099825186 +0000 UTC m=+0.178649221 container attach fbd79836bfed74d2dc22751f71d673eee3edafbdb215cbdc979ee529759ad1b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:18:16 np0005540825 mystifying_blackburn[266484]: 167 167
Dec  1 05:18:16 np0005540825 systemd[1]: libpod-fbd79836bfed74d2dc22751f71d673eee3edafbdb215cbdc979ee529759ad1b5.scope: Deactivated successfully.
Dec  1 05:18:16 np0005540825 podman[266468]: 2025-12-01 10:18:16.104193154 +0000 UTC m=+0.183017179 container died fbd79836bfed74d2dc22751f71d673eee3edafbdb215cbdc979ee529759ad1b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackburn, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  1 05:18:16 np0005540825 systemd[1]: var-lib-containers-storage-overlay-957de1b3b867d9412578e03113bad2939b99ca0ea6eacdb630b18c0490c83839-merged.mount: Deactivated successfully.
Dec  1 05:18:16 np0005540825 podman[266468]: 2025-12-01 10:18:16.154414011 +0000 UTC m=+0.233238006 container remove fbd79836bfed74d2dc22751f71d673eee3edafbdb215cbdc979ee529759ad1b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 05:18:16 np0005540825 systemd[1]: libpod-conmon-fbd79836bfed74d2dc22751f71d673eee3edafbdb215cbdc979ee529759ad1b5.scope: Deactivated successfully.
Dec  1 05:18:16 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:18:16 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:16 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:16 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:18:16 np0005540825 podman[266507]: 2025-12-01 10:18:16.336902124 +0000 UTC m=+0.047706310 container create 15fa16882ffff26ac2697e64315e485bbd0e42c7195f769eb13e211a7e3ed0d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  1 05:18:16 np0005540825 systemd[1]: Started libpod-conmon-15fa16882ffff26ac2697e64315e485bbd0e42c7195f769eb13e211a7e3ed0d0.scope.
Dec  1 05:18:16 np0005540825 podman[266507]: 2025-12-01 10:18:16.314827957 +0000 UTC m=+0.025632143 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:18:16 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:18:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae588305a0cb1cf9e690aa47f52154eb818e191fa0f4821018cb6cf81a3b5b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae588305a0cb1cf9e690aa47f52154eb818e191fa0f4821018cb6cf81a3b5b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae588305a0cb1cf9e690aa47f52154eb818e191fa0f4821018cb6cf81a3b5b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae588305a0cb1cf9e690aa47f52154eb818e191fa0f4821018cb6cf81a3b5b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae588305a0cb1cf9e690aa47f52154eb818e191fa0f4821018cb6cf81a3b5b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:16 np0005540825 podman[266507]: 2025-12-01 10:18:16.446139357 +0000 UTC m=+0.156943593 container init 15fa16882ffff26ac2697e64315e485bbd0e42c7195f769eb13e211a7e3ed0d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_dubinsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:18:16 np0005540825 podman[266507]: 2025-12-01 10:18:16.460076154 +0000 UTC m=+0.170880340 container start 15fa16882ffff26ac2697e64315e485bbd0e42c7195f769eb13e211a7e3ed0d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_dubinsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:18:16 np0005540825 podman[266507]: 2025-12-01 10:18:16.465407198 +0000 UTC m=+0.176211454 container attach 15fa16882ffff26ac2697e64315e485bbd0e42c7195f769eb13e211a7e3ed0d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:18:16 np0005540825 competent_dubinsky[266523]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:18:16 np0005540825 competent_dubinsky[266523]: --> All data devices are unavailable
Dec  1 05:18:16 np0005540825 systemd[1]: libpod-15fa16882ffff26ac2697e64315e485bbd0e42c7195f769eb13e211a7e3ed0d0.scope: Deactivated successfully.
Dec  1 05:18:16 np0005540825 podman[266507]: 2025-12-01 10:18:16.855929244 +0000 UTC m=+0.566733440 container died 15fa16882ffff26ac2697e64315e485bbd0e42c7195f769eb13e211a7e3ed0d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  1 05:18:16 np0005540825 systemd[1]: var-lib-containers-storage-overlay-dae588305a0cb1cf9e690aa47f52154eb818e191fa0f4821018cb6cf81a3b5b4-merged.mount: Deactivated successfully.
Dec  1 05:18:16 np0005540825 podman[266507]: 2025-12-01 10:18:16.910359236 +0000 UTC m=+0.621163432 container remove 15fa16882ffff26ac2697e64315e485bbd0e42c7195f769eb13e211a7e3ed0d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_dubinsky, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 05:18:16 np0005540825 systemd[1]: libpod-conmon-15fa16882ffff26ac2697e64315e485bbd0e42c7195f769eb13e211a7e3ed0d0.scope: Deactivated successfully.
Dec  1 05:18:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:17.209Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:18:17 np0005540825 podman[266642]: 2025-12-01 10:18:17.672366134 +0000 UTC m=+0.073401135 container create 9f75451729f043fb25cb13b6282d9a5f058d2e8945165b5fb926530d586ddcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_driscoll, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:18:17 np0005540825 systemd[1]: Started libpod-conmon-9f75451729f043fb25cb13b6282d9a5f058d2e8945165b5fb926530d586ddcaa.scope.
Dec  1 05:18:17 np0005540825 podman[266642]: 2025-12-01 10:18:17.642291991 +0000 UTC m=+0.043327042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:18:17 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:18:17 np0005540825 podman[266642]: 2025-12-01 10:18:17.775879142 +0000 UTC m=+0.176914203 container init 9f75451729f043fb25cb13b6282d9a5f058d2e8945165b5fb926530d586ddcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_driscoll, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:18:17 np0005540825 podman[266642]: 2025-12-01 10:18:17.786465918 +0000 UTC m=+0.187500919 container start 9f75451729f043fb25cb13b6282d9a5f058d2e8945165b5fb926530d586ddcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_driscoll, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec  1 05:18:17 np0005540825 podman[266642]: 2025-12-01 10:18:17.790530808 +0000 UTC m=+0.191565859 container attach 9f75451729f043fb25cb13b6282d9a5f058d2e8945165b5fb926530d586ddcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_driscoll, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:18:17 np0005540825 eloquent_driscoll[266660]: 167 167
Dec  1 05:18:17 np0005540825 systemd[1]: libpod-9f75451729f043fb25cb13b6282d9a5f058d2e8945165b5fb926530d586ddcaa.scope: Deactivated successfully.
Dec  1 05:18:17 np0005540825 podman[266642]: 2025-12-01 10:18:17.794894076 +0000 UTC m=+0.195929067 container died 9f75451729f043fb25cb13b6282d9a5f058d2e8945165b5fb926530d586ddcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_driscoll, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:18:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:18:17 np0005540825 systemd[1]: var-lib-containers-storage-overlay-33a37252632918e6e0bc8c1c1605f6eb68703e5d5873d54eacf48abc5db3295e-merged.mount: Deactivated successfully.
Dec  1 05:18:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:17.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:17 np0005540825 podman[266642]: 2025-12-01 10:18:17.848727582 +0000 UTC m=+0.249762583 container remove 9f75451729f043fb25cb13b6282d9a5f058d2e8945165b5fb926530d586ddcaa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 05:18:17 np0005540825 systemd[1]: libpod-conmon-9f75451729f043fb25cb13b6282d9a5f058d2e8945165b5fb926530d586ddcaa.scope: Deactivated successfully.
Dec  1 05:18:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:18 np0005540825 podman[266683]: 2025-12-01 10:18:18.090367212 +0000 UTC m=+0.061560766 container create 819862b296c80ee45498afe4371686835296459884a821eaf909fe9206151098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_neumann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:18:18 np0005540825 systemd[1]: Started libpod-conmon-819862b296c80ee45498afe4371686835296459884a821eaf909fe9206151098.scope.
Dec  1 05:18:18 np0005540825 podman[266683]: 2025-12-01 10:18:18.066834386 +0000 UTC m=+0.038027949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:18:18 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:18:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb057ddc11b393a330428eb06473b587870b665cc93a623bf937ca57e310a90d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb057ddc11b393a330428eb06473b587870b665cc93a623bf937ca57e310a90d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb057ddc11b393a330428eb06473b587870b665cc93a623bf937ca57e310a90d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb057ddc11b393a330428eb06473b587870b665cc93a623bf937ca57e310a90d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:18 np0005540825 podman[266683]: 2025-12-01 10:18:18.203614894 +0000 UTC m=+0.174808497 container init 819862b296c80ee45498afe4371686835296459884a821eaf909fe9206151098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 05:18:18 np0005540825 podman[266683]: 2025-12-01 10:18:18.215896856 +0000 UTC m=+0.187090419 container start 819862b296c80ee45498afe4371686835296459884a821eaf909fe9206151098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:18:18 np0005540825 podman[266683]: 2025-12-01 10:18:18.220388587 +0000 UTC m=+0.191582150 container attach 819862b296c80ee45498afe4371686835296459884a821eaf909fe9206151098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_neumann, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]: {
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:    "1": [
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:        {
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            "devices": [
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "/dev/loop3"
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            ],
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            "lv_name": "ceph_lv0",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            "lv_size": "21470642176",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            "name": "ceph_lv0",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            "tags": {
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.cluster_name": "ceph",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.crush_device_class": "",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.encrypted": "0",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.osd_id": "1",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.type": "block",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.vdo": "0",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:                "ceph.with_tpm": "0"
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            },
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            "type": "block",
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:            "vg_name": "ceph_vg0"
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:        }
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]:    ]
Dec  1 05:18:18 np0005540825 compassionate_neumann[266699]: }
Dec  1 05:18:18 np0005540825 systemd[1]: libpod-819862b296c80ee45498afe4371686835296459884a821eaf909fe9206151098.scope: Deactivated successfully.
Dec  1 05:18:18 np0005540825 podman[266683]: 2025-12-01 10:18:18.559448293 +0000 UTC m=+0.530641816 container died 819862b296c80ee45498afe4371686835296459884a821eaf909fe9206151098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec  1 05:18:18 np0005540825 systemd[1]: var-lib-containers-storage-overlay-bb057ddc11b393a330428eb06473b587870b665cc93a623bf937ca57e310a90d-merged.mount: Deactivated successfully.
Dec  1 05:18:18 np0005540825 podman[266683]: 2025-12-01 10:18:18.619736872 +0000 UTC m=+0.590930435 container remove 819862b296c80ee45498afe4371686835296459884a821eaf909fe9206151098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:18:18 np0005540825 systemd[1]: libpod-conmon-819862b296c80ee45498afe4371686835296459884a821eaf909fe9206151098.scope: Deactivated successfully.
Dec  1 05:18:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:19 np0005540825 nova_compute[256151]: 2025-12-01 10:18:19.310 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:19 np0005540825 podman[266811]: 2025-12-01 10:18:19.327612487 +0000 UTC m=+0.075755448 container create 3d9afcd0fae9ff53e65c56d12d2a6e2f4813f686d523908c0059af04b8b4cfd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Dec  1 05:18:19 np0005540825 systemd[1]: Started libpod-conmon-3d9afcd0fae9ff53e65c56d12d2a6e2f4813f686d523908c0059af04b8b4cfd3.scope.
Dec  1 05:18:19 np0005540825 podman[266811]: 2025-12-01 10:18:19.292104558 +0000 UTC m=+0.040247579 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:18:19 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:18:19 np0005540825 podman[266811]: 2025-12-01 10:18:19.432831512 +0000 UTC m=+0.180974453 container init 3d9afcd0fae9ff53e65c56d12d2a6e2f4813f686d523908c0059af04b8b4cfd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:18:19 np0005540825 podman[266811]: 2025-12-01 10:18:19.445010251 +0000 UTC m=+0.193153212 container start 3d9afcd0fae9ff53e65c56d12d2a6e2f4813f686d523908c0059af04b8b4cfd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 05:18:19 np0005540825 podman[266811]: 2025-12-01 10:18:19.44943095 +0000 UTC m=+0.197573881 container attach 3d9afcd0fae9ff53e65c56d12d2a6e2f4813f686d523908c0059af04b8b4cfd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 05:18:19 np0005540825 frosty_agnesi[266827]: 167 167
Dec  1 05:18:19 np0005540825 systemd[1]: libpod-3d9afcd0fae9ff53e65c56d12d2a6e2f4813f686d523908c0059af04b8b4cfd3.scope: Deactivated successfully.
Dec  1 05:18:19 np0005540825 podman[266811]: 2025-12-01 10:18:19.45349457 +0000 UTC m=+0.201637571 container died 3d9afcd0fae9ff53e65c56d12d2a6e2f4813f686d523908c0059af04b8b4cfd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 05:18:19 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9f430d28e43dd5699222f5363cf7e4b7df85da0402bc956434c5d689eea8de1c-merged.mount: Deactivated successfully.
Dec  1 05:18:19 np0005540825 podman[266811]: 2025-12-01 10:18:19.499292688 +0000 UTC m=+0.247435609 container remove 3d9afcd0fae9ff53e65c56d12d2a6e2f4813f686d523908c0059af04b8b4cfd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_agnesi, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  1 05:18:19 np0005540825 systemd[1]: libpod-conmon-3d9afcd0fae9ff53e65c56d12d2a6e2f4813f686d523908c0059af04b8b4cfd3.scope: Deactivated successfully.
Dec  1 05:18:19 np0005540825 podman[266852]: 2025-12-01 10:18:19.759565894 +0000 UTC m=+0.069668194 container create 292fab95a617bab3a29b2ce01e4a423f2825ad5c91b06f64ef7a20dea2554570 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:18:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:18:19 np0005540825 systemd[1]: Started libpod-conmon-292fab95a617bab3a29b2ce01e4a423f2825ad5c91b06f64ef7a20dea2554570.scope.
Dec  1 05:18:19 np0005540825 podman[266852]: 2025-12-01 10:18:19.729744788 +0000 UTC m=+0.039847148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:18:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:19.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:19 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:18:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adae925c0967fd9bce5ae3fd8e4b7a45a3da4239a61f0d938e94d571495634f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adae925c0967fd9bce5ae3fd8e4b7a45a3da4239a61f0d938e94d571495634f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adae925c0967fd9bce5ae3fd8e4b7a45a3da4239a61f0d938e94d571495634f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adae925c0967fd9bce5ae3fd8e4b7a45a3da4239a61f0d938e94d571495634f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:18:19 np0005540825 podman[266852]: 2025-12-01 10:18:19.872500907 +0000 UTC m=+0.182603237 container init 292fab95a617bab3a29b2ce01e4a423f2825ad5c91b06f64ef7a20dea2554570 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_payne, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  1 05:18:19 np0005540825 podman[266852]: 2025-12-01 10:18:19.887941054 +0000 UTC m=+0.198043354 container start 292fab95a617bab3a29b2ce01e4a423f2825ad5c91b06f64ef7a20dea2554570 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_payne, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  1 05:18:19 np0005540825 podman[266852]: 2025-12-01 10:18:19.892371444 +0000 UTC m=+0.202473804 container attach 292fab95a617bab3a29b2ce01e4a423f2825ad5c91b06f64ef7a20dea2554570 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_payne, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 05:18:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:20 np0005540825 nova_compute[256151]: 2025-12-01 10:18:20.600 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:20 np0005540825 lvm[266943]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:18:20 np0005540825 lvm[266943]: VG ceph_vg0 finished
Dec  1 05:18:20 np0005540825 friendly_payne[266868]: {}
Dec  1 05:18:20 np0005540825 systemd[1]: libpod-292fab95a617bab3a29b2ce01e4a423f2825ad5c91b06f64ef7a20dea2554570.scope: Deactivated successfully.
Dec  1 05:18:20 np0005540825 systemd[1]: libpod-292fab95a617bab3a29b2ce01e4a423f2825ad5c91b06f64ef7a20dea2554570.scope: Consumed 1.360s CPU time.
Dec  1 05:18:20 np0005540825 podman[266852]: 2025-12-01 10:18:20.689912813 +0000 UTC m=+1.000015143 container died 292fab95a617bab3a29b2ce01e4a423f2825ad5c91b06f64ef7a20dea2554570 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:18:20 np0005540825 systemd[1]: var-lib-containers-storage-overlay-adae925c0967fd9bce5ae3fd8e4b7a45a3da4239a61f0d938e94d571495634f6-merged.mount: Deactivated successfully.
Dec  1 05:18:20 np0005540825 podman[266852]: 2025-12-01 10:18:20.746719148 +0000 UTC m=+1.056821408 container remove 292fab95a617bab3a29b2ce01e4a423f2825ad5c91b06f64ef7a20dea2554570 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  1 05:18:20 np0005540825 systemd[1]: libpod-conmon-292fab95a617bab3a29b2ce01e4a423f2825ad5c91b06f64ef7a20dea2554570.scope: Deactivated successfully.
Dec  1 05:18:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:18:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:18:20 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:21] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:18:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:21] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:18:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:18:21 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:21 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:18:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:21.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:18:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:18:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:23.655Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:18:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:18:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:18:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:23.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:18:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:24.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:24 np0005540825 podman[266986]: 2025-12-01 10:18:24.229209285 +0000 UTC m=+0.083564869 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 05:18:24 np0005540825 nova_compute[256151]: 2025-12-01 10:18:24.312 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:18:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:18:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:25 np0005540825 nova_compute[256151]: 2025-12-01 10:18:25.603 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Dec  1 05:18:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:18:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:25.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:18:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:26.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:27.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:18:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:18:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.1 total, 600.0 interval
    Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 10K writes, 2803 syncs, 3.86 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1606 writes, 4059 keys, 1606 commit groups, 1.0 writes per commit group, ingest: 3.17 MB, 0.01 MB/s
    Interval WAL: 1606 writes, 729 syncs, 2.20 writes per sync, written: 0.00 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  1 05:18:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 739 KiB/s wr, 75 op/s
Dec  1 05:18:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:27.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:28.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:29 np0005540825 podman[267035]: 2025-12-01 10:18:29.215423452 +0000 UTC m=+0.068193958 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:18:29 np0005540825 nova_compute[256151]: 2025-12-01 10:18:29.315 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  1 05:18:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:18:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:29.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:18:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:30.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:30 np0005540825 nova_compute[256151]: 2025-12-01 10:18:30.606 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:31] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Dec  1 05:18:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:31] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Dec  1 05:18:31 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 05:18:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  1 05:18:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:31.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:18:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:32.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:18:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:33.656Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:18:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  1 05:18:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:33.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:34.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:34 np0005540825 nova_compute[256151]: 2025-12-01 10:18:34.358 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:35 np0005540825 nova_compute[256151]: 2025-12-01 10:18:35.610 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 115 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 131 op/s
Dec  1 05:18:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:35.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:36.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:37.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:18:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec  1 05:18:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:37.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:38.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:39 np0005540825 podman[267066]: 2025-12-01 10:18:39.261357537 +0000 UTC m=+0.123615052 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 05:18:39 np0005540825 nova_compute[256151]: 2025-12-01 10:18:39.360 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:18:39
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.mgr', 'volumes', '.nfs', 'backups']
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:18:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:18:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:18:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:18:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:18:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:40.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:40 np0005540825 nova_compute[256151]: 2025-12-01 10:18:40.650 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:41] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec  1 05:18:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:41] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec  1 05:18:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  1 05:18:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:18:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:41.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:18:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:42.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:43.657Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:18:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:18:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:43.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:18:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:44.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:18:44 np0005540825 nova_compute[256151]: 2025-12-01 10:18:44.405 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:45 np0005540825 nova_compute[256151]: 2025-12-01 10:18:45.653 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  1 05:18:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:45.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:46.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:47.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:18:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 722 KiB/s wr, 8 op/s
Dec  1 05:18:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:47.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:48.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:49 np0005540825 nova_compute[256151]: 2025-12-01 10:18:49.409 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 11 KiB/s wr, 1 op/s
Dec  1 05:18:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:49.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:50 np0005540825 nova_compute[256151]: 2025-12-01 10:18:50.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:18:50 np0005540825 nova_compute[256151]: 2025-12-01 10:18:50.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:18:50 np0005540825 nova_compute[256151]: 2025-12-01 10:18:50.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:18:50 np0005540825 nova_compute[256151]: 2025-12-01 10:18:50.047 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:18:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:50.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:50 np0005540825 nova_compute[256151]: 2025-12-01 10:18:50.674 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:51 np0005540825 nova_compute[256151]: 2025-12-01 10:18:51.043 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:18:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:51] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec  1 05:18:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:18:51] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec  1 05:18:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 17 KiB/s wr, 1 op/s
Dec  1 05:18:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:51.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:52 np0005540825 nova_compute[256151]: 2025-12-01 10:18:52.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:18:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:52.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:53 np0005540825 nova_compute[256151]: 2025-12-01 10:18:53.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:18:53 np0005540825 nova_compute[256151]: 2025-12-01 10:18:53.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:18:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:53.658Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:18:53 np0005540825 nova_compute[256151]: 2025-12-01 10:18:53.769 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "e1cf90f4-8776-435c-9045-5e998a50cf01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:18:53 np0005540825 nova_compute[256151]: 2025-12-01 10:18:53.769 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:18:53 np0005540825 nova_compute[256151]: 2025-12-01 10:18:53.799 256155 DEBUG nova.compute.manager [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 05:18:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 17 KiB/s wr, 1 op/s
Dec  1 05:18:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:18:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:53.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:18:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:18:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:54.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.195 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.196 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.212 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.212 256155 INFO nova.compute.claims [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Claim successful on node compute-0.ctlplane.example.com
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.336 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.411 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:18:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:18:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:18:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:18:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2612711508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.846 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.854 256155 DEBUG nova.compute.provider_tree [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.872 256155 DEBUG nova.scheduler.client.report [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
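[editor's note] The inventory dictionary above fixes what placement will actually hand out: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values logged:

    # Effective capacity implied by the reported inventory.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB ~52.2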
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.907 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.909 256155 DEBUG nova.compute.manager [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.974 256155 DEBUG nova.compute.manager [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 05:18:54 np0005540825 nova_compute[256151]: 2025-12-01 10:18:54.974 256155 DEBUG nova.network.neutron [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.003 256155 INFO nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.024 256155 DEBUG nova.compute.manager [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.144 256155 DEBUG nova.compute.manager [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.146 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.147 256155 INFO nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Creating image(s)#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.199 256155 DEBUG nova.storage.rbd_utils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image e1cf90f4-8776-435c-9045-5e998a50cf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.251 256155 DEBUG nova.storage.rbd_utils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image e1cf90f4-8776-435c-9045-5e998a50cf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:18:55 np0005540825 podman[267156]: 2025-12-01 10:18:55.260852852 +0000 UTC m=+0.119056593 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.295 256155 DEBUG nova.storage.rbd_utils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image e1cf90f4-8776-435c-9045-5e998a50cf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.300 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.326 256155 DEBUG nova.policy [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b56a238daf0445798410e51caada0ff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f6be4e572624210b91193c011607c08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.374 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
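[editor's note] The prlimit wrapper in the command above bounds qemu-img with a 1 GiB address-space cap and 30 s of CPU time, so a malformed base image cannot wedge the compute service. A sketch of the same invocation, reading the virtual size back from the JSON that qemu-img info emits ("format" and "virtual-size" are standard fields):

    # Inspect the cached base image under resource limits, as logged.
    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34'
    out = subprocess.check_output([
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', base, '--force-share', '--output=json',
    ])
    info = json.loads(out)
    print(info['format'], info['virtual-size'])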
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.375 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.377 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.378 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:18:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.423 256155 DEBUG nova.storage.rbd_utils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image e1cf90f4-8776-435c-9045-5e998a50cf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.429 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 e1cf90f4-8776-435c-9045-5e998a50cf01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.677 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:18:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 662 KiB/s wr, 12 op/s
Dec  1 05:18:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:55.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:55 np0005540825 nova_compute[256151]: 2025-12-01 10:18:55.928 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 e1cf90f4-8776-435c-9045-5e998a50cf01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.046 256155 DEBUG nova.storage.rbd_utils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] resizing rbd image e1cf90f4-8776-435c-9045-5e998a50cf01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
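[editor's note] Together, the import at 10:18:55.429 and the resize logged here are the whole root-disk build for an RBD-backed instance: copy the cached base image into the vms pool, then grow it to the flavor's 1 GiB root disk. A CLI-level sketch of the same two steps; the 1073741824 bytes are requested below as --size 1G, assuming the usual rbd size suffixes:

    # Import the cached base image into Ceph, then resize to root_gb.
    import subprocess

    base = '/var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34'
    disk = 'e1cf90f4-8776-435c-9045-5e998a50cf01_disk'
    common = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, disk,
                           '--image-format=2'] + common)
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms',
                           '--size', '1G', disk] + common)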
Dec  1 05:18:56 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:18:56.066 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:18:56 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:18:56.070 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 05:18:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:56.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.098 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.188 256155 DEBUG nova.network.neutron [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Successfully created port: 00708a0f-61a2-499a-8116-e51af4ea857a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.199 256155 DEBUG nova.objects.instance [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'migration_context' on Instance uuid e1cf90f4-8776-435c-9045-5e998a50cf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.217 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.217 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Ensure instance console log exists: /var/lib/nova/instances/e1cf90f4-8776-435c-9045-5e998a50cf01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.218 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.219 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.219 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.973 256155 DEBUG nova.network.neutron [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Successfully updated port: 00708a0f-61a2-499a-8116-e51af4ea857a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.988 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "refresh_cache-e1cf90f4-8776-435c-9045-5e998a50cf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.988 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquired lock "refresh_cache-e1cf90f4-8776-435c-9045-5e998a50cf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:18:56 np0005540825 nova_compute[256151]: 2025-12-01 10:18:56.988 256155 DEBUG nova.network.neutron [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.051 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.052 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.052 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.053 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.053 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.115 256155 DEBUG nova.compute.manager [req-e8b44502-31a9-4ebb-8313-e029e5066fed req-c9bfc9a8-cf1e-459f-a886-56404c859f8f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Received event network-changed-00708a0f-61a2-499a-8116-e51af4ea857a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.116 256155 DEBUG nova.compute.manager [req-e8b44502-31a9-4ebb-8313-e029e5066fed req-c9bfc9a8-cf1e-459f-a886-56404c859f8f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Refreshing instance network info cache due to event network-changed-00708a0f-61a2-499a-8116-e51af4ea857a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.116 256155 DEBUG oslo_concurrency.lockutils [req-e8b44502-31a9-4ebb-8313-e029e5066fed req-c9bfc9a8-cf1e-459f-a886-56404c859f8f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-e1cf90f4-8776-435c-9045-5e998a50cf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.181 256155 DEBUG nova.network.neutron [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 05:18:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:57.214Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:18:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:18:57.214Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
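[editor's note] Both ceph-dashboard webhook receivers are unreachable here: compute-2 times out at the TCP level and compute-1 exhausts its context deadline, so the alert notification is dropped after two attempts. A hypothetical stdlib probe (not from the log) against the same endpoint, with a similar timeout budget:

    # Probe the receiver endpoint alertmanager is timing out on.
    import urllib.error
    import urllib.request

    url = 'http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver'
    req = urllib.request.Request(url, data=b'{}',
                                 headers={'Content-Type': 'application/json'},
                                 method='POST')
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)
    except (urllib.error.URLError, OSError) as exc:
        print('unreachable:', exc)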
Dec  1 05:18:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:18:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2091576883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.568 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.823 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.825 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4612MB free_disk=59.93333053588867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.825 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:18:57 np0005540825 nova_compute[256151]: 2025-12-01 10:18:57.825 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:18:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 137 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 661 KiB/s wr, 12 op/s
Dec  1 05:18:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:57.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:18:58.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:18:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:18:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:18:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:18:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:18:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.223 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Instance e1cf90f4-8776-435c-9045-5e998a50cf01 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.224 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.224 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
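[editor's note] The final view is internally consistent with the claim made at 10:18:54: used_ram is the 512 MB reserved host memory plus the m1.nano flavor's 128 MB, and the single instance accounts for the one vCPU and 1 GB of disk:

    # Worked check of the "Final resource view" line above.
    used_ram_mb = 512 + 128   # reserved_host_memory_mb + flavor memory_mb
    used_vcpus = 0 + 1        # reserved vcpus + flavor vcpus
    used_disk_gb = 0 + 1      # flavor root_gb
    assert (used_ram_mb, used_vcpus, used_disk_gb) == (640, 1, 1)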
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.275 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.309 256155 DEBUG nova.network.neutron [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Updating instance_info_cache with network_info: [{"id": "00708a0f-61a2-499a-8116-e51af4ea857a", "address": "fa:16:3e:93:28:e2", "network": {"id": "9724ce57-aa56-45bd-89b7-afc7e1797626", "bridge": "br-int", "label": "tempest-network-smoke--1740338665", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00708a0f-61", "ovs_interfaceid": "00708a0f-61a2-499a-8116-e51af4ea857a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.414 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:18:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:18:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3707188935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.757 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.765 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.774 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Releasing lock "refresh_cache-e1cf90f4-8776-435c-9045-5e998a50cf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.774 256155 DEBUG nova.compute.manager [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Instance network_info: |[{"id": "00708a0f-61a2-499a-8116-e51af4ea857a", "address": "fa:16:3e:93:28:e2", "network": {"id": "9724ce57-aa56-45bd-89b7-afc7e1797626", "bridge": "br-int", "label": "tempest-network-smoke--1740338665", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00708a0f-61", "ovs_interfaceid": "00708a0f-61a2-499a-8116-e51af4ea857a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.775 256155 DEBUG oslo_concurrency.lockutils [req-e8b44502-31a9-4ebb-8313-e029e5066fed req-c9bfc9a8-cf1e-459f-a886-56404c859f8f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-e1cf90f4-8776-435c-9045-5e998a50cf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.776 256155 DEBUG nova.network.neutron [req-e8b44502-31a9-4ebb-8313-e029e5066fed req-c9bfc9a8-cf1e-459f-a886-56404c859f8f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Refreshing network info cache for port 00708a0f-61a2-499a-8116-e51af4ea857a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.781 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Start _get_guest_xml network_info=[{"id": "00708a0f-61a2-499a-8116-e51af4ea857a", "address": "fa:16:3e:93:28:e2", "network": {"id": "9724ce57-aa56-45bd-89b7-afc7e1797626", "bridge": "br-int", "label": "tempest-network-smoke--1740338665", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00708a0f-61", "ovs_interfaceid": "00708a0f-61a2-499a-8116-e51af4ea857a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '8f75d6de-6ce0-44e1-b417-d0111424475b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.785 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.795 256155 WARNING nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.804 256155 DEBUG nova.virt.libvirt.host [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.805 256155 DEBUG nova.virt.libvirt.host [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.813 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.814 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.816 256155 DEBUG nova.virt.libvirt.host [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.817 256155 DEBUG nova.virt.libvirt.host [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
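[editor's note] The two probes above distinguish the legacy and unified cgroup layouts: a v1 host exposes a dedicated cpu,cpuacct hierarchy, while on v2 the enabled controllers are listed in a single file at the root of the unified mount, which is why the v1 check misses and the v2 check hits on this host. A sketch of both checks:

    # Detect a usable CPU controller under cgroup v1 and v2.
    import os

    v1 = os.path.isdir('/sys/fs/cgroup/cpu,cpuacct')
    try:
        with open('/sys/fs/cgroup/cgroup.controllers') as f:
            v2 = 'cpu' in f.read().split()
    except FileNotFoundError:
        v2 = False
    print('cgroup v1 cpu:', v1, '| cgroup v2 cpu:', v2)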
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.818 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.818 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T10:14:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e731827-1896-49cd-b0cc-12903555d217',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.819 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.819 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.820 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.820 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.821 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.821 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.822 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.822 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.823 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.823 256155 DEBUG nova.virt.hardware [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
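[editor's note] With flavor and image limits all at 0:0:0 (unconstrained) and one vCPU, the search space collapses to a single candidate, which is why exactly one topology is logged and then chosen. An illustrative re-run of that enumeration (not nova's code verbatim):

    # Enumerate socket/core/thread splits of the vCPU count.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)]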
Dec  1 05:18:59 np0005540825 nova_compute[256151]: 2025-12-01 10:18:59.828 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:18:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 137 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 661 KiB/s wr, 12 op/s
Dec  1 05:18:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:18:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:18:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:18:59.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:00 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:00.073 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:19:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:00.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:00 np0005540825 podman[267411]: 2025-12-01 10:19:00.224495063 +0000 UTC m=+0.083671861 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  1 05:19:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:19:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3765291121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:19:00 np0005540825 nova_compute[256151]: 2025-12-01 10:19:00.310 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
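[editor's note] The mon map fetched here (and again just below for the .config image) is what lets the driver address the cluster when it writes the RBD hosts into the guest disk definition. A sketch of extracting monitor names and addresses, assuming ceph's standard JSON mon-map fields ("mons" entries with "name" and "addr"):

    # Read the monitor map the driver just fetched.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    monmap = json.loads(out)
    for mon in monmap['mons']:
        print(mon['name'], mon['addr'])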
Dec  1 05:19:00 np0005540825 nova_compute[256151]: 2025-12-01 10:19:00.343 256155 DEBUG nova.storage.rbd_utils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image e1cf90f4-8776-435c-9045-5e998a50cf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:19:00 np0005540825 nova_compute[256151]: 2025-12-01 10:19:00.349 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:19:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:00 np0005540825 nova_compute[256151]: 2025-12-01 10:19:00.680 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:19:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067651324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:19:00 np0005540825 nova_compute[256151]: 2025-12-01 10:19:00.919 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:19:00 np0005540825 nova_compute[256151]: 2025-12-01 10:19:00.921 256155 DEBUG nova.virt.libvirt.vif [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1833698453',display_name='tempest-TestNetworkBasicOps-server-1833698453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1833698453',id=5,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIErBHbuun640R6FmJNfZg22QQshsQmGa3yao92C0pcKaeLgkSpmZBu4bAVuunPnHp7ytS52KtYD3dQ4T4GtdUYHibJX4j/vFO5DWMIMfeEb0tEyzSM2o0ebzGN+Oiw67A==',key_name='tempest-TestNetworkBasicOps-18502438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-n4a3j5m5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:18:55Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=e1cf90f4-8776-435c-9045-5e998a50cf01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00708a0f-61a2-499a-8116-e51af4ea857a", "address": "fa:16:3e:93:28:e2", "network": {"id": "9724ce57-aa56-45bd-89b7-afc7e1797626", "bridge": "br-int", "label": "tempest-network-smoke--1740338665", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00708a0f-61", "ovs_interfaceid": "00708a0f-61a2-499a-8116-e51af4ea857a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 05:19:00 np0005540825 nova_compute[256151]: 2025-12-01 10:19:00.922 256155 DEBUG nova.network.os_vif_util [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "00708a0f-61a2-499a-8116-e51af4ea857a", "address": "fa:16:3e:93:28:e2", "network": {"id": "9724ce57-aa56-45bd-89b7-afc7e1797626", "bridge": "br-int", "label": "tempest-network-smoke--1740338665", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00708a0f-61", "ovs_interfaceid": "00708a0f-61a2-499a-8116-e51af4ea857a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:19:00 np0005540825 nova_compute[256151]: 2025-12-01 10:19:00.924 256155 DEBUG nova.network.os_vif_util [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:28:e2,bridge_name='br-int',has_traffic_filtering=True,id=00708a0f-61a2-499a-8116-e51af4ea857a,network=Network(9724ce57-aa56-45bd-89b7-afc7e1797626),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00708a0f-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:19:00 np0005540825 nova_compute[256151]: 2025-12-01 10:19:00.926 256155 DEBUG nova.objects.instance [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1cf90f4-8776-435c-9045-5e998a50cf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
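The "Converting VIF ..." / "Converted object VIFOpenVSwitch(...)" pair above is nova_to_osvif_vif translating the Neutron port dictionary into an os-vif object before plugging. A minimal sketch of constructing the equivalent object with os-vif's public classes, using the field values from the logged repr; this illustrates the result, not nova's conversion code itself:

    from os_vif.objects import vif as vif_obj

    profile = vif_obj.VIFPortProfileOpenVSwitch(
        interface_id='00708a0f-61a2-499a-8116-e51af4ea857a')
    vif = vif_obj.VIFOpenVSwitch(
        id='00708a0f-61a2-499a-8116-e51af4ea857a',  # Neutron port UUID
        address='fa:16:3e:93:28:e2',                # port MAC address
        bridge_name='br-int',                       # OVS integration bridge
        vif_name='tap00708a0f-61',                  # tap device name
        has_traffic_filtering=True,                 # OVN supplies port_filter
        preserve_on_delete=False,
        active=False,                               # port not yet bound/up
        port_profile=profile)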
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.298 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] End _get_guest_xml xml=<domain type="kvm">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  <uuid>e1cf90f4-8776-435c-9045-5e998a50cf01</uuid>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  <name>instance-00000005</name>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  <memory>131072</memory>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  <vcpu>1</vcpu>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  <metadata>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <nova:name>tempest-TestNetworkBasicOps-server-1833698453</nova:name>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <nova:creationTime>2025-12-01 10:18:59</nova:creationTime>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <nova:flavor name="m1.nano">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <nova:memory>128</nova:memory>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <nova:disk>1</nova:disk>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <nova:swap>0</nova:swap>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <nova:vcpus>1</nova:vcpus>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      </nova:flavor>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <nova:owner>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <nova:user uuid="5b56a238daf0445798410e51caada0ff">tempest-TestNetworkBasicOps-1248115384-project-member</nova:user>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <nova:project uuid="9f6be4e572624210b91193c011607c08">tempest-TestNetworkBasicOps-1248115384</nova:project>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      </nova:owner>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <nova:root type="image" uuid="8f75d6de-6ce0-44e1-b417-d0111424475b"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <nova:ports>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <nova:port uuid="00708a0f-61a2-499a-8116-e51af4ea857a">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        </nova:port>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      </nova:ports>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    </nova:instance>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  </metadata>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  <sysinfo type="smbios">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <system>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <entry name="manufacturer">RDO</entry>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <entry name="product">OpenStack Compute</entry>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <entry name="serial">e1cf90f4-8776-435c-9045-5e998a50cf01</entry>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <entry name="uuid">e1cf90f4-8776-435c-9045-5e998a50cf01</entry>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <entry name="family">Virtual Machine</entry>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    </system>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  </sysinfo>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  <os>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <boot dev="hd"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <smbios mode="sysinfo"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  </os>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  <features>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <acpi/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <apic/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <vmcoreinfo/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  </features>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  <clock offset="utc">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <timer name="hpet" present="no"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  </clock>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  <cpu mode="host-model" match="exact">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  </cpu>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  <devices>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <disk type="network" device="disk">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/e1cf90f4-8776-435c-9045-5e998a50cf01_disk">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <target dev="vda" bus="virtio"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <disk type="network" device="cdrom">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/e1cf90f4-8776-435c-9045-5e998a50cf01_disk.config">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <target dev="sda" bus="sata"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <interface type="ethernet">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <mac address="fa:16:3e:93:28:e2"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <mtu size="1442"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <target dev="tap00708a0f-61"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    </interface>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <serial type="pty">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <log file="/var/lib/nova/instances/e1cf90f4-8776-435c-9045-5e998a50cf01/console.log" append="off"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    </serial>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <video>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    </video>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <input type="tablet" bus="usb"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <rng model="virtio">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <backend model="random">/dev/urandom</backend>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    </rng>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <controller type="usb" index="0"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    <memballoon model="virtio">
Dec  1 05:19:01 np0005540825 nova_compute[256151]:      <stats period="10"/>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:    </memballoon>
Dec  1 05:19:01 np0005540825 nova_compute[256151]:  </devices>
Dec  1 05:19:01 np0005540825 nova_compute[256151]: </domain>
Dec  1 05:19:01 np0005540825 nova_compute[256151]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
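Everything between "End _get_guest_xml" and the line above is the libvirt domain XML nova defines for instance-00000005: both disks are RBD-backed network disks pointing at the three Ceph monitors, and the interface is the tap device plugged into br-int below. A minimal sketch (standard library only; `domain_xml` is assumed to hold the <domain> text above) of pulling the RBD endpoints and disk targets back out of such a dump when auditing logs:

    import xml.etree.ElementTree as ET

    root = ET.fromstring(domain_xml)
    for disk in root.findall('./devices/disk'):
        source, target = disk.find('source'), disk.find('target')
        if source is not None and source.get('protocol') == 'rbd':
            hosts = [h.get('name') for h in source.findall('host')]
            print(target.get('dev'), source.get('name'), hosts)
    # Expected here: vda and sda, both vms/e1cf90f4-..._disk images
    # served by 192.168.122.100/.101/.102 on port 6789.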
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.299 256155 DEBUG nova.compute.manager [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Preparing to wait for external event network-vif-plugged-00708a0f-61a2-499a-8116-e51af4ea857a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.300 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.300 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.301 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
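The Acquiring/acquired/released triple above is oslo.concurrency's lockutils serializing access to the per-instance event list while nova registers its wait for network-vif-plugged. A minimal sketch of the same primitive, assuming only the documented API:

    from oslo_concurrency import lockutils

    # Takes the named in-process lock and releases it on exit; with DEBUG
    # logging enabled this emits the acquire/release timings seen above.
    with lockutils.lock('e1cf90f4-8776-435c-9045-5e998a50cf01-events'):
        pass  # create-or-get the pending event entry here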
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.302 256155 DEBUG nova.virt.libvirt.vif [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1833698453',display_name='tempest-TestNetworkBasicOps-server-1833698453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1833698453',id=5,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIErBHbuun640R6FmJNfZg22QQshsQmGa3yao92C0pcKaeLgkSpmZBu4bAVuunPnHp7ytS52KtYD3dQ4T4GtdUYHibJX4j/vFO5DWMIMfeEb0tEyzSM2o0ebzGN+Oiw67A==',key_name='tempest-TestNetworkBasicOps-18502438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-n4a3j5m5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:18:55Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=e1cf90f4-8776-435c-9045-5e998a50cf01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00708a0f-61a2-499a-8116-e51af4ea857a", "address": "fa:16:3e:93:28:e2", "network": {"id": "9724ce57-aa56-45bd-89b7-afc7e1797626", "bridge": "br-int", "label": "tempest-network-smoke--1740338665", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00708a0f-61", "ovs_interfaceid": "00708a0f-61a2-499a-8116-e51af4ea857a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.302 256155 DEBUG nova.network.os_vif_util [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "00708a0f-61a2-499a-8116-e51af4ea857a", "address": "fa:16:3e:93:28:e2", "network": {"id": "9724ce57-aa56-45bd-89b7-afc7e1797626", "bridge": "br-int", "label": "tempest-network-smoke--1740338665", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00708a0f-61", "ovs_interfaceid": "00708a0f-61a2-499a-8116-e51af4ea857a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.303 256155 DEBUG nova.network.os_vif_util [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:28:e2,bridge_name='br-int',has_traffic_filtering=True,id=00708a0f-61a2-499a-8116-e51af4ea857a,network=Network(9724ce57-aa56-45bd-89b7-afc7e1797626),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00708a0f-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.304 256155 DEBUG os_vif [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:28:e2,bridge_name='br-int',has_traffic_filtering=True,id=00708a0f-61a2-499a-8116-e51af4ea857a,network=Network(9724ce57-aa56-45bd-89b7-afc7e1797626),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00708a0f-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.305 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.305 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.306 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.311 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.312 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00708a0f-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.312 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00708a0f-61, col_values=(('external_ids', {'iface-id': '00708a0f-61a2-499a-8116-e51af4ea857a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:28:e2', 'vm-uuid': 'e1cf90f4-8776-435c-9045-5e998a50cf01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.314 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:01 np0005540825 NetworkManager[48963]: <info>  [1764584341.3165] manager: (tap00708a0f-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.317 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.327 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.330 256155 INFO os_vif [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:28:e2,bridge_name='br-int',has_traffic_filtering=True,id=00708a0f-61a2-499a-8116-e51af4ea857a,network=Network(9724ce57-aa56-45bd-89b7-afc7e1797626),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00708a0f-61')#033[00m
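The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are os-vif driving ovsdbapp against the local OVS database: ensure br-int exists, add the tap port, and stamp its Interface row with the iface-id and MAC that let OVN bind the port. A minimal sketch of the same three commands through ovsdbapp's Open vSwitch API; the socket path is an assumption, since os-vif derives its connection from config:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server endpoint.
    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),
        timeout=5)
    api = impl_idl.OvsdbIdl(conn)

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True))      # AddBridgeCommand
        txn.add(api.add_port('br-int', 'tap00708a0f-61'))  # AddPortCommand
        txn.add(api.db_set(                                # DbSetCommand
            'Interface', 'tap00708a0f-61',
            ('external_ids', {
                'iface-id': '00708a0f-61a2-499a-8116-e51af4ea857a',
                'attached-mac': 'fa:16:3e:93:28:e2',
                'iface-status': 'active'})))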
Dec  1 05:19:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:01] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:19:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:01] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.408 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.408 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.409 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No VIF found with MAC fa:16:3e:93:28:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.410 256155 INFO nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Using config drive#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.451 256155 DEBUG nova.storage.rbd_utils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image e1cf90f4-8776-435c-9045-5e998a50cf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.816 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:19:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  1 05:19:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:01.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.926 256155 DEBUG nova.network.neutron [req-e8b44502-31a9-4ebb-8313-e029e5066fed req-c9bfc9a8-cf1e-459f-a886-56404c859f8f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Updated VIF entry in instance network info cache for port 00708a0f-61a2-499a-8116-e51af4ea857a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 05:19:01 np0005540825 nova_compute[256151]: 2025-12-01 10:19:01.926 256155 DEBUG nova.network.neutron [req-e8b44502-31a9-4ebb-8313-e029e5066fed req-c9bfc9a8-cf1e-459f-a886-56404c859f8f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Updating instance_info_cache with network_info: [{"id": "00708a0f-61a2-499a-8116-e51af4ea857a", "address": "fa:16:3e:93:28:e2", "network": {"id": "9724ce57-aa56-45bd-89b7-afc7e1797626", "bridge": "br-int", "label": "tempest-network-smoke--1740338665", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00708a0f-61", "ovs_interfaceid": "00708a0f-61a2-499a-8116-e51af4ea857a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:19:02 np0005540825 nova_compute[256151]: 2025-12-01 10:19:02.013 256155 DEBUG oslo_concurrency.lockutils [req-e8b44502-31a9-4ebb-8313-e029e5066fed req-c9bfc9a8-cf1e-459f-a886-56404c859f8f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-e1cf90f4-8776-435c-9045-5e998a50cf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:19:02 np0005540825 nova_compute[256151]: 2025-12-01 10:19:02.102 256155 INFO nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Creating config drive at /var/lib/nova/instances/e1cf90f4-8776-435c-9045-5e998a50cf01/disk.config#033[00m
Dec  1 05:19:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:02.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:02 np0005540825 nova_compute[256151]: 2025-12-01 10:19:02.111 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1cf90f4-8776-435c-9045-5e998a50cf01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphia4s_86 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:19:02 np0005540825 nova_compute[256151]: 2025-12-01 10:19:02.246 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1cf90f4-8776-435c-9045-5e998a50cf01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphia4s_86" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:19:02 np0005540825 nova_compute[256151]: 2025-12-01 10:19:02.290 256155 DEBUG nova.storage.rbd_utils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image e1cf90f4-8776-435c-9045-5e998a50cf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:19:02 np0005540825 nova_compute[256151]: 2025-12-01 10:19:02.295 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1cf90f4-8776-435c-9045-5e998a50cf01/disk.config e1cf90f4-8776-435c-9045-5e998a50cf01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:19:02 np0005540825 nova_compute[256151]: 2025-12-01 10:19:02.702 256155 DEBUG oslo_concurrency.processutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1cf90f4-8776-435c-9045-5e998a50cf01/disk.config e1cf90f4-8776-435c-9045-5e998a50cf01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:19:02 np0005540825 nova_compute[256151]: 2025-12-01 10:19:02.703 256155 INFO nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Deleting local config drive /var/lib/nova/instances/e1cf90f4-8776-435c-9045-5e998a50cf01/disk.config because it was imported into RBD.#033[00m
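The lines above show the config drive round trip: mkisofs builds the ISO locally, `rbd import` pushes it into the vms pool as the _disk.config image whose earlier absence was logged three times, and the local copy is then deleted. A minimal sketch of the same import using the python rbd/rados bindings instead of the CLI; pool, image, and credential names are taken from the log, and reading the whole ISO into memory is a simplification:

    import rados
    import rbd

    ISO = '/var/lib/nova/instances/e1cf90f4-8776-435c-9045-5e998a50cf01/disk.config'
    NAME = 'e1cf90f4-8776-435c-9045-5e998a50cf01_disk.config'

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with open(ISO, 'rb') as f:
                data = f.read()
            # old_format=False is the binding's spelling of --image-format=2.
            rbd.RBD().create(ioctx, NAME, len(data), old_format=False)
            with rbd.Image(ioctx, NAME) as image:
                image.write(data, 0)  # write the ISO contents at offset 0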
Dec  1 05:19:02 np0005540825 systemd[1]: Starting libvirt secret daemon...
Dec  1 05:19:02 np0005540825 systemd[1]: Started libvirt secret daemon.
Dec  1 05:19:02 np0005540825 kernel: tap00708a0f-61: entered promiscuous mode
Dec  1 05:19:02 np0005540825 NetworkManager[48963]: <info>  [1764584342.8406] manager: (tap00708a0f-61): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec  1 05:19:02 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:02Z|00039|binding|INFO|Claiming lport 00708a0f-61a2-499a-8116-e51af4ea857a for this chassis.
Dec  1 05:19:02 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:02Z|00040|binding|INFO|00708a0f-61a2-499a-8116-e51af4ea857a: Claiming fa:16:3e:93:28:e2 10.100.0.5
Dec  1 05:19:02 np0005540825 nova_compute[256151]: 2025-12-01 10:19:02.841 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:02 np0005540825 nova_compute[256151]: 2025-12-01 10:19:02.860 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:02 np0005540825 systemd-machined[216307]: New machine qemu-2-instance-00000005.
Dec  1 05:19:02 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:02Z|00041|binding|INFO|Setting lport 00708a0f-61a2-499a-8116-e51af4ea857a ovn-installed in OVS
Dec  1 05:19:02 np0005540825 nova_compute[256151]: 2025-12-01 10:19:02.947 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:02 np0005540825 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Dec  1 05:19:02 np0005540825 systemd-udevd[267567]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 05:19:02 np0005540825 NetworkManager[48963]: <info>  [1764584342.9830] device (tap00708a0f-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 05:19:02 np0005540825 NetworkManager[48963]: <info>  [1764584342.9857] device (tap00708a0f-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 05:19:03 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:03Z|00042|binding|INFO|Setting lport 00708a0f-61a2-499a-8116-e51af4ea857a up in Southbound
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.256 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:28:e2 10.100.0.5'], port_security=['fa:16:3e:93:28:e2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e1cf90f4-8776-435c-9045-5e998a50cf01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9724ce57-aa56-45bd-89b7-afc7e1797626', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '2', 'neutron:security_group_ids': '952c8456-ee1f-4a99-be96-36ad0dc4d1b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4822df82-76a2-47ee-af9c-1f30abd18b35, chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=00708a0f-61a2-499a-8116-e51af4ea857a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.259 163291 INFO neutron.agent.ovn.metadata.agent [-] Port 00708a0f-61a2-499a-8116-e51af4ea857a in datapath 9724ce57-aa56-45bd-89b7-afc7e1797626 bound to our chassis#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.262 163291 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9724ce57-aa56-45bd-89b7-afc7e1797626#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.274 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4c9b9d-4e4a-4d03-9288-24c3d3323381]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.275 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9724ce57-a1 in ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
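Provisioning metadata for the network means building an ovnmeta- namespace with a veth pair: tap9724ce57-a0 stays in the root namespace (it is moved onto br-int in the transactions below), while its peer tap9724ce57-a1 lives inside the namespace; the surrounding privsep reply lines are the netlink round trips for that plumbing. A minimal sketch of the same veth setup with pyroute2, the library neutron's privileged ip_lib wraps; names are from the log, and it assumes the namespace does not exist yet:

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626'
    netns.create(NS)  # the agent creates one such namespace per network

    with IPRoute() as ip:
        # Create the pair with the peer end placed directly in the namespace.
        ip.link('add', ifname='tap9724ce57-a0', kind='veth',
                peer={'ifname': 'tap9724ce57-a1', 'net_ns_fd': NS})
        idx = ip.link_lookup(ifname='tap9724ce57-a0')[0]
        ip.link('set', index=idx, state='up')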
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.277 262668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9724ce57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.277 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[a687937b-a58b-45fd-af4e-f08070ef8b80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.279 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[16547c54-acff-4b0e-9c22-ff41f048b6f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.295 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[1d08f711-26fd-41a6-8f91-2ebb255728d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.320 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[849d5727-b505-4919-9143-81a302fb198d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.361 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[1a15080c-52cb-4d54-8a07-d25a6d37f297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.370 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[02d6cb04-a9a7-40f3-989c-866ab148162a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 systemd-udevd[267569]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 05:19:03 np0005540825 NetworkManager[48963]: <info>  [1764584343.3734] manager: (tap9724ce57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.412 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[0073c41c-e1ad-470a-bbd4-bc1318242ecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.415 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[e27f4343-12ce-4bd4-885b-d45b8ce85644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 NetworkManager[48963]: <info>  [1764584343.4394] device (tap9724ce57-a0): carrier: link connected
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.443 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1d0c12-dad4-468e-b044-6ad8c8e331e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.465 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea279d3-f562-4fd2-9cdb-04d1b9d87278]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9724ce57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:a5:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421207, 'reachable_time': 24546, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267600, 'error': None, 'target': 'ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.489 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1b4183-d1d2-4da4-a9f9-f69246b630f4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe90:a50d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 421207, 'tstamp': 421207}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267601, 'error': None, 'target': 'ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.514 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[f9378428-715c-4053-89f0-8726d82dcd57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9724ce57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:a5:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421207, 'reachable_time': 24546, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267602, 'error': None, 'target': 'ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.565 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6bdb58-f1b4-4b8f-a9c4-bcb5d9997c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.657 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[53f4092b-8fb6-4970-9e86-f11e57809575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
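
The privsep replies above are netlink dumps (RTM_NEWADDR / RTM_NEWLINK) that the agent's privileged helper fetched from inside the ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626 namespace, confirming the tap interface is up with its fa:16:3e:90:a5:0d address. A minimal sketch of the same query with pyroute2 (not the agent's actual code); it assumes pyroute2 is installed, the namespace from the log still exists, and root privileges:

    # Dump links and addresses from the ovnmeta namespace, producing the
    # same information as the RTM_NEWLINK / RTM_NEWADDR replies above.
    from pyroute2 import NetNS

    NS = 'ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626'  # from the log

    ns = NetNS(NS)
    try:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_OPERSTATE'))
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), '/', addr['prefixlen'])
    finally:
        ns.close()
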
Dec  1 05:19:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:03.659Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.660 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9724ce57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.661 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.661 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9724ce57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:19:03 np0005540825 nova_compute[256151]: 2025-12-01 10:19:03.664 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:03 np0005540825 NetworkManager[48963]: <info>  [1764584343.6661] manager: (tap9724ce57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Dec  1 05:19:03 np0005540825 kernel: tap9724ce57-a0: entered promiscuous mode
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.669 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9724ce57-a0, col_values=(('external_ids', {'iface-id': 'd98c2f49-51ea-4026-b448-cbfddf286f5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:19:03 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:03Z|00043|binding|INFO|Releasing lport d98c2f49-51ea-4026-b448-cbfddf286f5f from this chassis (sb_readonly=1)
Dec  1 05:19:03 np0005540825 nova_compute[256151]: 2025-12-01 10:19:03.671 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:03 np0005540825 nova_compute[256151]: 2025-12-01 10:19:03.701 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.702 163291 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9724ce57-aa56-45bd-89b7-afc7e1797626.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9724ce57-aa56-45bd-89b7-afc7e1797626.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
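
The "Unable to access ... .pid.haproxy" line is the agent probing for an existing proxy pidfile before spawning one; a missing file is the expected first-start case, not a failure. An illustrative sketch of that tolerant read (the helper name is invented, only the path comes from the log):

    # Tolerant pidfile read in the spirit of get_value_from_file: a
    # missing file just means no proxy has been started yet.
    def read_pid(path):
        try:
            with open(path) as f:
                return int(f.read().strip())
        except FileNotFoundError:
            return None  # matches the [Errno 2] case in the log

    pid = read_pid('/var/lib/neutron/external/pids/'
                   '9724ce57-aa56-45bd-89b7-afc7e1797626.pid.haproxy')
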
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.703 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0d7e09-dbf7-4acb-a3ce-f1be6d5b38b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.704 163291 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: global
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    log         /dev/log local0 debug
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    log-tag     haproxy-metadata-proxy-9724ce57-aa56-45bd-89b7-afc7e1797626
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    user        root
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    group       root
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    maxconn     1024
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    pidfile     /var/lib/neutron/external/pids/9724ce57-aa56-45bd-89b7-afc7e1797626.pid.haproxy
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    daemon
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: defaults
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    log global
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    mode http
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    option httplog
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    option dontlognull
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    option http-server-close
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    option forwardfor
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    retries                 3
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    timeout http-request    30s
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    timeout connect         30s
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    timeout client          32s
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    timeout server          32s
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    timeout http-keep-alive 30s
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: listen listener
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    bind 169.254.169.254:80
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]:    http-request add-header X-OVN-Network-ID 9724ce57-aa56-45bd-89b7-afc7e1797626
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 05:19:03 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:03.705 163291 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626', 'env', 'PROCESS_TAG=haproxy-9724ce57-aa56-45bd-89b7-afc7e1797626', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9724ce57-aa56-45bd-89b7-afc7e1797626.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
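
The block above shows the two steps the metadata agent takes for this network: render the full haproxy configuration (create_config_file) and launch haproxy inside the ovnmeta namespace via rootwrap (create_process). A condensed sketch of the same two steps, with the template abbreviated; paths, namespace, and network ID come from the log, and it assumes the target directories exist and root privileges:

    # Write an abbreviated metadata-proxy haproxy config, then start
    # haproxy inside the network namespace, mirroring the logged
    # create_config_file + create_process sequence (minus rootwrap).
    import subprocess

    NETWORK_ID = '9724ce57-aa56-45bd-89b7-afc7e1797626'  # from the log
    NETNS = 'ovnmeta-' + NETWORK_ID
    CFG = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % NETWORK_ID

    CONFIG = """\
    global
        user    root
        group   root
        maxconn 1024
        pidfile /var/lib/neutron/external/pids/%s.pid.haproxy
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID %s
    """ % (NETWORK_ID, NETWORK_ID)

    with open(CFG, 'w') as f:
        f.write(CONFIG)

    subprocess.run(['ip', 'netns', 'exec', NETNS, 'haproxy', '-f', CFG],
                   check=True)
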
Dec  1 05:19:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:19:03 np0005540825 nova_compute[256151]: 2025-12-01 10:19:03.834 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584343.833611, e1cf90f4-8776-435c-9045-5e998a50cf01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:19:03 np0005540825 nova_compute[256151]: 2025-12-01 10:19:03.835 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] VM Started (Lifecycle Event)#033[00m
Dec  1 05:19:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:03.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:03 np0005540825 nova_compute[256151]: 2025-12-01 10:19:03.993 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:19:03 np0005540825 nova_compute[256151]: 2025-12-01 10:19:03.997 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584343.8337388, e1cf90f4-8776-435c-9045-5e998a50cf01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:19:03 np0005540825 nova_compute[256151]: 2025-12-01 10:19:03.998 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] VM Paused (Lifecycle Event)#033[00m
Dec  1 05:19:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:19:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:04.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:04 np0005540825 nova_compute[256151]: 2025-12-01 10:19:04.197 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:19:04 np0005540825 nova_compute[256151]: 2025-12-01 10:19:04.203 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 05:19:04 np0005540825 podman[267678]: 2025-12-01 10:19:04.222187933 +0000 UTC m=+0.085734975 container create f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:19:04 np0005540825 podman[267678]: 2025-12-01 10:19:04.182569681 +0000 UTC m=+0.046116763 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 05:19:04 np0005540825 systemd[1]: Started libpod-conmon-f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8.scope.
Dec  1 05:19:04 np0005540825 nova_compute[256151]: 2025-12-01 10:19:04.290 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
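
The "Synchronizing instance power state ..." / "pending task (spawning). Skip." pair shows nova deliberately ignoring lifecycle-driven power-state sync while a task is in flight, so the transient Paused state reported during libvirt spawn does not overwrite the DB. A toy sketch of that guard (all names illustrative, not nova's actual code):

    # Illustrative guard mirroring the logged behaviour: skip power
    # state sync while the instance still has a task in flight.
    def sync_power_state(instance, vm_power_state):
        if instance['task_state'] is not None:
            print('pending task (%s), skip' % instance['task_state'])
            return
        if instance['db_power_state'] != vm_power_state:
            instance['db_power_state'] = vm_power_state

    # DB power_state 0 (NOSTATE) vs VM power_state 3 (paused), as logged.
    sync_power_state({'task_state': 'spawning', 'db_power_state': 0}, 3)
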
Dec  1 05:19:04 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:19:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368ef1296aa107b23a6f44446d47a8be57c3fe5c6033ec7e0f29cf40ca3f2004/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:04 np0005540825 podman[267678]: 2025-12-01 10:19:04.339800158 +0000 UTC m=+0.203347190 container init f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 05:19:04 np0005540825 podman[267678]: 2025-12-01 10:19:04.350203049 +0000 UTC m=+0.213750091 container start f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:19:04 np0005540825 neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626[267693]: [NOTICE]   (267697) : New worker (267699) forked
Dec  1 05:19:04 np0005540825 neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626[267693]: [NOTICE]   (267697) : Loading success.
Dec  1 05:19:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:04.576 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:19:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:04.577 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:19:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:04.578 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
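
The acquiring/acquired/released triplet above is oslo.concurrency's standard named-lock logging around the process monitor's critical section. A minimal sketch producing the same pattern, assuming oslo.concurrency is installed:

    # The same named-lock pattern logged above; oslo.concurrency emits
    # the "Acquiring lock" / "acquired" / "released" DEBUG lines itself.
    from oslo_concurrency import lockutils

    with lockutils.lock('_check_child_processes'):
        pass  # check child processes here
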
Dec  1 05:19:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:05 np0005540825 nova_compute[256151]: 2025-12-01 10:19:05.736 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec  1 05:19:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:05.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:06.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.166 256155 DEBUG nova.compute.manager [req-4b9037b2-369d-4125-9c87-d234b60a94fd req-d6a14218-b589-4501-8ad2-eccc8da88d29 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Received event network-vif-plugged-00708a0f-61a2-499a-8116-e51af4ea857a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.167 256155 DEBUG oslo_concurrency.lockutils [req-4b9037b2-369d-4125-9c87-d234b60a94fd req-d6a14218-b589-4501-8ad2-eccc8da88d29 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.167 256155 DEBUG oslo_concurrency.lockutils [req-4b9037b2-369d-4125-9c87-d234b60a94fd req-d6a14218-b589-4501-8ad2-eccc8da88d29 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.167 256155 DEBUG oslo_concurrency.lockutils [req-4b9037b2-369d-4125-9c87-d234b60a94fd req-d6a14218-b589-4501-8ad2-eccc8da88d29 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.168 256155 DEBUG nova.compute.manager [req-4b9037b2-369d-4125-9c87-d234b60a94fd req-d6a14218-b589-4501-8ad2-eccc8da88d29 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Processing event network-vif-plugged-00708a0f-61a2-499a-8116-e51af4ea857a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.169 256155 DEBUG nova.compute.manager [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.174 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584346.174405, e1cf90f4-8776-435c-9045-5e998a50cf01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.175 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] VM Resumed (Lifecycle Event)#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.178 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.182 256155 INFO nova.virt.libvirt.driver [-] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Instance spawned successfully.#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.182 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.316 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.425 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.435 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.440 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.441 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.441 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.442 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.443 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.443 256155 DEBUG nova.virt.libvirt.driver [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.513 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.551 256155 INFO nova.compute.manager [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Took 11.41 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.551 256155 DEBUG nova.compute.manager [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.625 256155 INFO nova.compute.manager [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Took 12.75 seconds to build instance.#033[00m
Dec  1 05:19:06 np0005540825 nova_compute[256151]: 2025-12-01 10:19:06.642 256155 DEBUG oslo_concurrency.lockutils [None req-b670625f-278a-449e-a2c5-72b4816f4aa5 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:19:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:07.215Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:19:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:07.216Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:19:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Dec  1 05:19:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:07.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:08.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:08 np0005540825 nova_compute[256151]: 2025-12-01 10:19:08.286 256155 DEBUG nova.compute.manager [req-2ce87dfb-fdf0-40c3-b8b9-c37706884f80 req-a646c628-d05d-4989-9e7b-b7654ca75ac0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Received event network-vif-plugged-00708a0f-61a2-499a-8116-e51af4ea857a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:19:08 np0005540825 nova_compute[256151]: 2025-12-01 10:19:08.287 256155 DEBUG oslo_concurrency.lockutils [req-2ce87dfb-fdf0-40c3-b8b9-c37706884f80 req-a646c628-d05d-4989-9e7b-b7654ca75ac0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:19:08 np0005540825 nova_compute[256151]: 2025-12-01 10:19:08.287 256155 DEBUG oslo_concurrency.lockutils [req-2ce87dfb-fdf0-40c3-b8b9-c37706884f80 req-a646c628-d05d-4989-9e7b-b7654ca75ac0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:19:08 np0005540825 nova_compute[256151]: 2025-12-01 10:19:08.288 256155 DEBUG oslo_concurrency.lockutils [req-2ce87dfb-fdf0-40c3-b8b9-c37706884f80 req-a646c628-d05d-4989-9e7b-b7654ca75ac0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:19:08 np0005540825 nova_compute[256151]: 2025-12-01 10:19:08.288 256155 DEBUG nova.compute.manager [req-2ce87dfb-fdf0-40c3-b8b9-c37706884f80 req-a646c628-d05d-4989-9e7b-b7654ca75ac0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] No waiting events found dispatching network-vif-plugged-00708a0f-61a2-499a-8116-e51af4ea857a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:19:08 np0005540825 nova_compute[256151]: 2025-12-01 10:19:08.289 256155 WARNING nova.compute.manager [req-2ce87dfb-fdf0-40c3-b8b9-c37706884f80 req-a646c628-d05d-4989-9e7b-b7654ca75ac0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Received unexpected event network-vif-plugged-00708a0f-61a2-499a-8116-e51af4ea857a for instance with vm_state active and task_state None.#033[00m
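
This WARNING is benign: neutron re-sent network-vif-plugged after the instance had already gone active (the first copy was consumed at 10:19:06), so no waiter was registered and pop_instance_event logs the event as unexpected. A toy sketch of that pop-or-warn pattern (all names illustrative):

    # Toy pop_instance_event: deliver the event to a registered waiter
    # if one exists, otherwise warn about the unexpected event.
    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def pop_instance_event(instance_uuid, event_name):
        ev = waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            print('WARNING: unexpected event %s for %s'
                  % (event_name, instance_uuid))
            return
        ev.set()  # wakes the thread blocked in wait_for_instance_event
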
Dec  1 05:19:08 np0005540825 nova_compute[256151]: 2025-12-01 10:19:08.293 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:08 np0005540825 NetworkManager[48963]: <info>  [1764584348.2945] manager: (patch-provnet-da274a4a-a49c-4f01-b728-391696cd2672-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Dec  1 05:19:08 np0005540825 NetworkManager[48963]: <info>  [1764584348.2955] manager: (patch-br-int-to-provnet-da274a4a-a49c-4f01-b728-391696cd2672): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec  1 05:19:08 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:08Z|00044|binding|INFO|Releasing lport d98c2f49-51ea-4026-b448-cbfddf286f5f from this chassis (sb_readonly=0)
Dec  1 05:19:08 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:08Z|00045|binding|INFO|Releasing lport d98c2f49-51ea-4026-b448-cbfddf286f5f from this chassis (sb_readonly=0)
Dec  1 05:19:08 np0005540825 nova_compute[256151]: 2025-12-01 10:19:08.327 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:08 np0005540825 nova_compute[256151]: 2025-12-01 10:19:08.332 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:19:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:19:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:19:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:19:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:19:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:19:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:19:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:19:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:19:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Dec  1 05:19:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:09.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:10.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:10 np0005540825 podman[267740]: 2025-12-01 10:19:10.335600383 +0000 UTC m=+0.188639606 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
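
The health_status=healthy line above is podman's periodic healthcheck of the ovn_controller container, running the configured test ('/openstack/healthcheck' per the config_data shown). The same check can be triggered on demand; a sketch assuming podman access on this host:

    # Run the container healthcheck manually; exit code 0 means healthy,
    # matching the health_status=healthy entry podman logged above.
    import subprocess

    r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'])
    print('healthy' if r.returncode == 0 else 'unhealthy (rc=%d)' % r.returncode)
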
Dec  1 05:19:10 np0005540825 nova_compute[256151]: 2025-12-01 10:19:10.345 256155 DEBUG nova.compute.manager [req-dd010f4e-2b6b-4b9a-83b6-9507b3982a7d req-f14a546a-a0cd-4ab5-a499-38d6deae3a6d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Received event network-changed-00708a0f-61a2-499a-8116-e51af4ea857a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:19:10 np0005540825 nova_compute[256151]: 2025-12-01 10:19:10.346 256155 DEBUG nova.compute.manager [req-dd010f4e-2b6b-4b9a-83b6-9507b3982a7d req-f14a546a-a0cd-4ab5-a499-38d6deae3a6d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Refreshing instance network info cache due to event network-changed-00708a0f-61a2-499a-8116-e51af4ea857a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 05:19:10 np0005540825 nova_compute[256151]: 2025-12-01 10:19:10.346 256155 DEBUG oslo_concurrency.lockutils [req-dd010f4e-2b6b-4b9a-83b6-9507b3982a7d req-f14a546a-a0cd-4ab5-a499-38d6deae3a6d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-e1cf90f4-8776-435c-9045-5e998a50cf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:19:10 np0005540825 nova_compute[256151]: 2025-12-01 10:19:10.346 256155 DEBUG oslo_concurrency.lockutils [req-dd010f4e-2b6b-4b9a-83b6-9507b3982a7d req-f14a546a-a0cd-4ab5-a499-38d6deae3a6d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-e1cf90f4-8776-435c-9045-5e998a50cf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:19:10 np0005540825 nova_compute[256151]: 2025-12-01 10:19:10.346 256155 DEBUG nova.network.neutron [req-dd010f4e-2b6b-4b9a-83b6-9507b3982a7d req-f14a546a-a0cd-4ab5-a499-38d6deae3a6d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Refreshing network info cache for port 00708a0f-61a2-499a-8116-e51af4ea857a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 05:19:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:10 np0005540825 nova_compute[256151]: 2025-12-01 10:19:10.738 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:11 np0005540825 nova_compute[256151]: 2025-12-01 10:19:11.319 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:11 np0005540825 nova_compute[256151]: 2025-12-01 10:19:11.345 256155 DEBUG nova.network.neutron [req-dd010f4e-2b6b-4b9a-83b6-9507b3982a7d req-f14a546a-a0cd-4ab5-a499-38d6deae3a6d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Updated VIF entry in instance network info cache for port 00708a0f-61a2-499a-8116-e51af4ea857a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 05:19:11 np0005540825 nova_compute[256151]: 2025-12-01 10:19:11.345 256155 DEBUG nova.network.neutron [req-dd010f4e-2b6b-4b9a-83b6-9507b3982a7d req-f14a546a-a0cd-4ab5-a499-38d6deae3a6d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Updating instance_info_cache with network_info: [{"id": "00708a0f-61a2-499a-8116-e51af4ea857a", "address": "fa:16:3e:93:28:e2", "network": {"id": "9724ce57-aa56-45bd-89b7-afc7e1797626", "bridge": "br-int", "label": "tempest-network-smoke--1740338665", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00708a0f-61", "ovs_interfaceid": "00708a0f-61a2-499a-8116-e51af4ea857a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:19:11 np0005540825 nova_compute[256151]: 2025-12-01 10:19:11.363 256155 DEBUG oslo_concurrency.lockutils [req-dd010f4e-2b6b-4b9a-83b6-9507b3982a7d req-f14a546a-a0cd-4ab5-a499-38d6deae3a6d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-e1cf90f4-8776-435c-9045-5e998a50cf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
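
The instance_info_cache update above carries the full VIF model for port 00708a0f-61a2-499a-8116-e51af4ea857a, including the fixed IP 10.100.0.5 and its floating IP 192.168.122.225. A short sketch extracting those fields from a structure of that shape (data trimmed to the fields used):

    # Pull fixed and floating addresses out of a network_info entry
    # shaped like the cache update logged above (fields trimmed).
    network_info = [{
        "id": "00708a0f-61a2-499a-8116-e51af4ea857a",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.5",
                     "floating_ips": [{"address": "192.168.122.225"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], '->', fips)
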
Dec  1 05:19:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:11] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec  1 05:19:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:11] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec  1 05:19:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 90 op/s
Dec  1 05:19:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:11.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:12.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:13.660Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:19:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec  1 05:19:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:13.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:19:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:14.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:15 np0005540825 nova_compute[256151]: 2025-12-01 10:19:15.742 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec  1 05:19:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:15.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:16.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:16 np0005540825 nova_compute[256151]: 2025-12-01 10:19:16.321 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:17.217Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:19:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:17.217Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:19:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:17.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
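
The alertmanager dispatcher keeps failing to POST to the ceph-dashboard webhook receivers on compute-1 and compute-2 (dial tcp ... i/o timeout), so the failure is connectivity to port 8443, not a receiver-side error. A minimal reachability probe of one receiver, assuming the requests library is available; the URL is taken from the log and the 5s timeout is a rough stand-in for the dispatcher's context deadline:

    # Reproduce the failing notification POST outside alertmanager to
    # separate network reachability from receiver errors.
    import requests

    url = 'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver'
    try:
        r = requests.post(url, json={'alerts': []}, timeout=5)
        print(r.status_code, r.text[:200])
    except requests.exceptions.RequestException as exc:
        print('unreachable:', exc)  # matches the dial tcp i/o timeout above
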
Dec  1 05:19:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.0 KiB/s wr, 71 op/s
Dec  1 05:19:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:17.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:18.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:19:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Dec  1 05:19:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:19.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:20.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:20 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:20Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:28:e2 10.100.0.5
Dec  1 05:19:20 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:20Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:28:e2 10.100.0.5
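
The DHCPOFFER/DHCPACK pair comes from ovn-controller's pinctrl thread: with OVN, the DHCP reply for 10.100.0.5 is answered locally from the logical port's DHCP_Options row rather than by a dnsmasq process. A sketch that dumps those rows for inspection, assuming ovn-nbctl can reach the northbound DB from this host:

    # List the DHCP_Options rows that back OVN's native DHCP replies
    # (the DHCPOFFER/DHCPACK logged by pinctrl above).
    import subprocess

    out = subprocess.run(
        ['ovn-nbctl', 'list', 'DHCP_Options'],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)
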
Dec  1 05:19:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:20 np0005540825 nova_compute[256151]: 2025-12-01 10:19:20.745 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:21 np0005540825 nova_compute[256151]: 2025-12-01 10:19:21.324 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:21] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec  1 05:19:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:21] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec  1 05:19:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 05:19:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 05:19:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Dec  1 05:19:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:21.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:22.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:22 np0005540825 podman[267950]: 2025-12-01 10:19:22.667822628 +0000 UTC m=+0.060005944 container create ee03d64f8266b88e5a7e2920983869e90a773ad3c7d7336c51928e42ccf79f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gates, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 05:19:22 np0005540825 systemd[1]: Started libpod-conmon-ee03d64f8266b88e5a7e2920983869e90a773ad3c7d7336c51928e42ccf79f4d.scope.
Dec  1 05:19:22 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:22 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:22 np0005540825 podman[267950]: 2025-12-01 10:19:22.644677215 +0000 UTC m=+0.036860541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:19:22 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:19:22 np0005540825 podman[267950]: 2025-12-01 10:19:22.759161058 +0000 UTC m=+0.151344404 container init ee03d64f8266b88e5a7e2920983869e90a773ad3c7d7336c51928e42ccf79f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gates, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:19:22 np0005540825 podman[267950]: 2025-12-01 10:19:22.770616817 +0000 UTC m=+0.162800123 container start ee03d64f8266b88e5a7e2920983869e90a773ad3c7d7336c51928e42ccf79f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:19:22 np0005540825 podman[267950]: 2025-12-01 10:19:22.774862997 +0000 UTC m=+0.167046273 container attach ee03d64f8266b88e5a7e2920983869e90a773ad3c7d7336c51928e42ccf79f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:19:22 np0005540825 xenodochial_gates[267966]: 167 167
Dec  1 05:19:22 np0005540825 podman[267950]: 2025-12-01 10:19:22.779411046 +0000 UTC m=+0.171594322 container died ee03d64f8266b88e5a7e2920983869e90a773ad3c7d7336c51928e42ccf79f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  1 05:19:22 np0005540825 systemd[1]: libpod-ee03d64f8266b88e5a7e2920983869e90a773ad3c7d7336c51928e42ccf79f4d.scope: Deactivated successfully.
Dec  1 05:19:22 np0005540825 systemd[1]: var-lib-containers-storage-overlay-721c5adb7247c0c7e80a28de79c8a0f172dd7514f33fa56b7e3aa6447b5577b9-merged.mount: Deactivated successfully.
Dec  1 05:19:22 np0005540825 podman[267950]: 2025-12-01 10:19:22.823228028 +0000 UTC m=+0.215411314 container remove ee03d64f8266b88e5a7e2920983869e90a773ad3c7d7336c51928e42ccf79f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_gates, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 05:19:22 np0005540825 systemd[1]: libpod-conmon-ee03d64f8266b88e5a7e2920983869e90a773ad3c7d7336c51928e42ccf79f4d.scope: Deactivated successfully.
Dec  1 05:19:23 np0005540825 podman[267990]: 2025-12-01 10:19:23.011225126 +0000 UTC m=+0.041267406 container create 3bdddb643d6697837d3795f5e4f23ddad40318090a9934f9a324d646d2e9068e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:19:23 np0005540825 systemd[1]: Started libpod-conmon-3bdddb643d6697837d3795f5e4f23ddad40318090a9934f9a324d646d2e9068e.scope.
Dec  1 05:19:23 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:19:23 np0005540825 podman[267990]: 2025-12-01 10:19:22.992751945 +0000 UTC m=+0.022794225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:19:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41e623bea1173107fcbda269d63e06bb79d3866f9c5a67f47ba944c47b501bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41e623bea1173107fcbda269d63e06bb79d3866f9c5a67f47ba944c47b501bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41e623bea1173107fcbda269d63e06bb79d3866f9c5a67f47ba944c47b501bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41e623bea1173107fcbda269d63e06bb79d3866f9c5a67f47ba944c47b501bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:23 np0005540825 podman[267990]: 2025-12-01 10:19:23.107517376 +0000 UTC m=+0.137559746 container init 3bdddb643d6697837d3795f5e4f23ddad40318090a9934f9a324d646d2e9068e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:19:23 np0005540825 podman[267990]: 2025-12-01 10:19:23.120812032 +0000 UTC m=+0.150854322 container start 3bdddb643d6697837d3795f5e4f23ddad40318090a9934f9a324d646d2e9068e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_goodall, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:19:23 np0005540825 podman[267990]: 2025-12-01 10:19:23.125145015 +0000 UTC m=+0.155187365 container attach 3bdddb643d6697837d3795f5e4f23ddad40318090a9934f9a324d646d2e9068e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:19:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:23.661Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:19:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 05:19:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:23 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 05:19:23 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:19:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:23.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]: [
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:    {
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:        "available": false,
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:        "being_replaced": false,
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:        "ceph_device_lvm": false,
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:        "lsm_data": {},
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:        "lvs": [],
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:        "path": "/dev/sr0",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:        "rejected_reasons": [
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "Has a FileSystem",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "Insufficient space (<5GB)"
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:        ],
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:        "sys_api": {
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "actuators": null,
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "device_nodes": [
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:                "sr0"
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            ],
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "devname": "sr0",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "human_readable_size": "482.00 KB",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "id_bus": "ata",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "model": "QEMU DVD-ROM",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "nr_requests": "2",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "parent": "/dev/sr0",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "partitions": {},
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "path": "/dev/sr0",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "removable": "1",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "rev": "2.5+",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "ro": "0",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "rotational": "1",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "sas_address": "",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "sas_device_handle": "",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "scheduler_mode": "mq-deadline",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "sectors": 0,
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "sectorsize": "2048",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "size": 493568.0,
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "support_discard": "2048",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "type": "disk",
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:            "vendor": "QEMU"
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:        }
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]:    }
Dec  1 05:19:24 np0005540825 relaxed_goodall[268006]: ]
Dec  1 05:19:24 np0005540825 systemd[1]: libpod-3bdddb643d6697837d3795f5e4f23ddad40318090a9934f9a324d646d2e9068e.scope: Deactivated successfully.
Dec  1 05:19:24 np0005540825 podman[267990]: 2025-12-01 10:19:24.079825301 +0000 UTC m=+1.109867571 container died 3bdddb643d6697837d3795f5e4f23ddad40318090a9934f9a324d646d2e9068e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Dec  1 05:19:24 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e41e623bea1173107fcbda269d63e06bb79d3866f9c5a67f47ba944c47b501bd-merged.mount: Deactivated successfully.
Dec  1 05:19:24 np0005540825 podman[267990]: 2025-12-01 10:19:24.122786261 +0000 UTC m=+1.152828521 container remove 3bdddb643d6697837d3795f5e4f23ddad40318090a9934f9a324d646d2e9068e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_goodall, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:19:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:24.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:24 np0005540825 systemd[1]: libpod-conmon-3bdddb643d6697837d3795f5e4f23ddad40318090a9934f9a324d646d2e9068e.scope: Deactivated successfully.
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.433 256155 INFO nova.compute.manager [None req-8fb62281-5d4d-4abd-896b-61ada90ead6f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Get console output#033[00m
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.442 262942 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:19:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.750 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.762 256155 DEBUG oslo_concurrency.lockutils [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "e1cf90f4-8776-435c-9045-5e998a50cf01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.763 256155 DEBUG oslo_concurrency.lockutils [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.763 256155 DEBUG oslo_concurrency.lockutils [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.764 256155 DEBUG oslo_concurrency.lockutils [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.764 256155 DEBUG oslo_concurrency.lockutils [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.766 256155 INFO nova.compute.manager [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Terminating instance#033[00m
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.768 256155 DEBUG nova.compute.manager [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 05:19:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:19:25 np0005540825 podman[269305]: 2025-12-01 10:19:25.872550125 +0000 UTC m=+0.075111998 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:19:25 np0005540825 kernel: tap00708a0f-61 (unregistering): left promiscuous mode
Dec  1 05:19:25 np0005540825 NetworkManager[48963]: <info>  [1764584365.9428] device (tap00708a0f-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 05:19:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:19:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:25.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.972 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:25 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:25Z|00046|binding|INFO|Releasing lport 00708a0f-61a2-499a-8116-e51af4ea857a from this chassis (sb_readonly=0)
Dec  1 05:19:25 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:25Z|00047|binding|INFO|Setting lport 00708a0f-61a2-499a-8116-e51af4ea857a down in Southbound
Dec  1 05:19:25 np0005540825 ovn_controller[153404]: 2025-12-01T10:19:25Z|00048|binding|INFO|Removing iface tap00708a0f-61 ovn-installed in OVS
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.974 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:25 np0005540825 nova_compute[256151]: 2025-12-01 10:19:25.998 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:26 np0005540825 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec  1 05:19:26 np0005540825 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 14.284s CPU time.
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.017 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:28:e2 10.100.0.5'], port_security=['fa:16:3e:93:28:e2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e1cf90f4-8776-435c-9045-5e998a50cf01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9724ce57-aa56-45bd-89b7-afc7e1797626', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '4', 'neutron:security_group_ids': '952c8456-ee1f-4a99-be96-36ad0dc4d1b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4822df82-76a2-47ee-af9c-1f30abd18b35, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=00708a0f-61a2-499a-8116-e51af4ea857a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.019 163291 INFO neutron.agent.ovn.metadata.agent [-] Port 00708a0f-61a2-499a-8116-e51af4ea857a in datapath 9724ce57-aa56-45bd-89b7-afc7e1797626 unbound from our chassis#033[00m
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.021 163291 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9724ce57-aa56-45bd-89b7-afc7e1797626, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 05:19:26 np0005540825 systemd-machined[216307]: Machine qemu-2-instance-00000005 terminated.
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.022 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f91293-9d07-4574-aef1-ca26ddf1463b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.023 163291 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626 namespace which is not needed anymore#033[00m
Dec  1 05:19:26 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:26 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:26 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:19:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:26.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.215 256155 INFO nova.virt.libvirt.driver [-] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Instance destroyed successfully.#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.218 256155 DEBUG nova.objects.instance [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'resources' on Instance uuid e1cf90f4-8776-435c-9045-5e998a50cf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:19:26 np0005540825 neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626[267693]: [NOTICE]   (267697) : haproxy version is 2.8.14-c23fe91
Dec  1 05:19:26 np0005540825 neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626[267693]: [NOTICE]   (267697) : path to executable is /usr/sbin/haproxy
Dec  1 05:19:26 np0005540825 neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626[267693]: [WARNING]  (267697) : Exiting Master process...
Dec  1 05:19:26 np0005540825 neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626[267693]: [ALERT]    (267697) : Current worker (267699) exited with code 143 (Terminated)
Dec  1 05:19:26 np0005540825 neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626[267693]: [WARNING]  (267697) : All workers exited. Exiting... (0)
Dec  1 05:19:26 np0005540825 systemd[1]: libpod-f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8.scope: Deactivated successfully.
Dec  1 05:19:26 np0005540825 podman[269386]: 2025-12-01 10:19:26.247812003 +0000 UTC m=+0.078034305 container died f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.257 256155 DEBUG nova.virt.libvirt.vif [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T10:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1833698453',display_name='tempest-TestNetworkBasicOps-server-1833698453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1833698453',id=5,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIErBHbuun640R6FmJNfZg22QQshsQmGa3yao92C0pcKaeLgkSpmZBu4bAVuunPnHp7ytS52KtYD3dQ4T4GtdUYHibJX4j/vFO5DWMIMfeEb0tEyzSM2o0ebzGN+Oiw67A==',key_name='tempest-TestNetworkBasicOps-18502438',keypairs=<?>,launch_index=0,launched_at=2025-12-01T10:19:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-n4a3j5m5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T10:19:06Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=e1cf90f4-8776-435c-9045-5e998a50cf01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "00708a0f-61a2-499a-8116-e51af4ea857a", "address": "fa:16:3e:93:28:e2", "network": {"id": "9724ce57-aa56-45bd-89b7-afc7e1797626", "bridge": "br-int", "label": "tempest-network-smoke--1740338665", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00708a0f-61", "ovs_interfaceid": "00708a0f-61a2-499a-8116-e51af4ea857a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.258 256155 DEBUG nova.network.os_vif_util [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "00708a0f-61a2-499a-8116-e51af4ea857a", "address": "fa:16:3e:93:28:e2", "network": {"id": "9724ce57-aa56-45bd-89b7-afc7e1797626", "bridge": "br-int", "label": "tempest-network-smoke--1740338665", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00708a0f-61", "ovs_interfaceid": "00708a0f-61a2-499a-8116-e51af4ea857a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.259 256155 DEBUG nova.network.os_vif_util [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:28:e2,bridge_name='br-int',has_traffic_filtering=True,id=00708a0f-61a2-499a-8116-e51af4ea857a,network=Network(9724ce57-aa56-45bd-89b7-afc7e1797626),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00708a0f-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.260 256155 DEBUG os_vif [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:28:e2,bridge_name='br-int',has_traffic_filtering=True,id=00708a0f-61a2-499a-8116-e51af4ea857a,network=Network(9724ce57-aa56-45bd-89b7-afc7e1797626),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00708a0f-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.262 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.262 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00708a0f-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.264 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.266 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.269 256155 INFO os_vif [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:28:e2,bridge_name='br-int',has_traffic_filtering=True,id=00708a0f-61a2-499a-8116-e51af4ea857a,network=Network(9724ce57-aa56-45bd-89b7-afc7e1797626),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00708a0f-61')#033[00m
Dec  1 05:19:26 np0005540825 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8-userdata-shm.mount: Deactivated successfully.
Dec  1 05:19:26 np0005540825 systemd[1]: var-lib-containers-storage-overlay-368ef1296aa107b23a6f44446d47a8be57c3fe5c6033ec7e0f29cf40ca3f2004-merged.mount: Deactivated successfully.
Dec  1 05:19:26 np0005540825 podman[269386]: 2025-12-01 10:19:26.29912903 +0000 UTC m=+0.129351332 container cleanup f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:19:26 np0005540825 systemd[1]: libpod-conmon-f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8.scope: Deactivated successfully.
Dec  1 05:19:26 np0005540825 podman[269461]: 2025-12-01 10:19:26.381787414 +0000 UTC m=+0.055160389 container remove f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.388 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[648b7ba0-6d41-40b5-a3fa-bdc4262f0bfd]: (4, ('Mon Dec  1 10:19:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626 (f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8)\nf7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8\nMon Dec  1 10:19:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626 (f7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8)\nf7554f346d9546b557d13715c71c873a12ff5c8dc9dd46de5109bcee24aa79a8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.390 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2eae52-3a95-46b7-92d0-8656acd0ee47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.394 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9724ce57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.397 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:26 np0005540825 kernel: tap9724ce57-a0: left promiscuous mode
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.415 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.418 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[8748d43a-14d3-434b-88a7-e16362a718de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.435 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[7db573c7-fd27-425f-b1a0-c25c51c8c8a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.436 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[a60d14a9-4640-4614-b122-1630f2f6d3ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.452 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bbc7eb-6f32-4612-aa87-dc93c9710d25]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421198, 'reachable_time': 19403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269493, 'error': None, 'target': 'ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.455 163408 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 05:19:26 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:19:26.455 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[42dbe231-5449-4de7-b464-c23b3041c378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:19:26 np0005540825 systemd[1]: run-netns-ovnmeta\x2d9724ce57\x2daa56\x2d45bd\x2d89b7\x2dafc7e1797626.mount: Deactivated successfully.
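The privsep replies above are the agent dumping the loopback link inside the ovnmeta namespace and then deleting the namespace; the systemd line above is the matching /run/netns bind mount being torn down. A minimal sketch of the same sequence, using pyroute2 directly (the library that neutron's privileged ip_lib wraps); the namespace name is copied from the log, and running this requires root and an existing namespace:

    from pyroute2 import NetNS, netns

    ns_name = 'ovnmeta-9724ce57-aa56-45bd-89b7-afc7e1797626'
    with NetNS(ns_name) as ns:
        # Equivalent of the get_links() call whose RTM_NEWLINK reply is dumped above.
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_MTU'))
    netns.remove(ns_name)  # produces the run-netns-*.mount deactivation seen above
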
Dec  1 05:19:26 np0005540825 podman[269481]: 2025-12-01 10:19:26.482466277 +0000 UTC m=+0.056414641 container create 48da34e4ec71f1b88784d2075fdb7c47a42a584742f8b7926c9646d5281b4928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:19:26 np0005540825 systemd[1]: Started libpod-conmon-48da34e4ec71f1b88784d2075fdb7c47a42a584742f8b7926c9646d5281b4928.scope.
Dec  1 05:19:26 np0005540825 podman[269481]: 2025-12-01 10:19:26.460281299 +0000 UTC m=+0.034229703 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:19:26 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:19:26 np0005540825 podman[269481]: 2025-12-01 10:19:26.5892557 +0000 UTC m=+0.163204144 container init 48da34e4ec71f1b88784d2075fdb7c47a42a584742f8b7926c9646d5281b4928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:19:26 np0005540825 podman[269481]: 2025-12-01 10:19:26.60117311 +0000 UTC m=+0.175121504 container start 48da34e4ec71f1b88784d2075fdb7c47a42a584742f8b7926c9646d5281b4928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 05:19:26 np0005540825 podman[269481]: 2025-12-01 10:19:26.605259277 +0000 UTC m=+0.179207671 container attach 48da34e4ec71f1b88784d2075fdb7c47a42a584742f8b7926c9646d5281b4928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_nash, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 05:19:26 np0005540825 systemd[1]: libpod-48da34e4ec71f1b88784d2075fdb7c47a42a584742f8b7926c9646d5281b4928.scope: Deactivated successfully.
Dec  1 05:19:26 np0005540825 unruffled_nash[269500]: 167 167
Dec  1 05:19:26 np0005540825 conmon[269500]: conmon 48da34e4ec71f1b88784 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48da34e4ec71f1b88784d2075fdb7c47a42a584742f8b7926c9646d5281b4928.scope/container/memory.events
Dec  1 05:19:26 np0005540825 podman[269481]: 2025-12-01 10:19:26.611703385 +0000 UTC m=+0.185651779 container died 48da34e4ec71f1b88784d2075fdb7c47a42a584742f8b7926c9646d5281b4928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 05:19:26 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9a6f0fe74061741b7a7a3f06788632e56ab4142e9ea2fbbbefdd32d67f4af9b1-merged.mount: Deactivated successfully.
Dec  1 05:19:26 np0005540825 podman[269481]: 2025-12-01 10:19:26.664010657 +0000 UTC m=+0.237959041 container remove 48da34e4ec71f1b88784d2075fdb7c47a42a584742f8b7926c9646d5281b4928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_nash, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:19:26 np0005540825 systemd[1]: libpod-conmon-48da34e4ec71f1b88784d2075fdb7c47a42a584742f8b7926c9646d5281b4928.scope: Deactivated successfully.
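The create/init/start/attach/died/remove burst above is a one-shot container: podman auto-generated the name unruffled_nash (no --name was passed), the container printed "167 167" and exited within roughly 10 ms, and conmon's memory.events warning is the kind commonly seen when the cgroup is gone before conmon can read it. These probes appear to be cephadm inspecting the host with the pinned ceph image; the log does not show the real entrypoint arguments, so 'true' below is a placeholder. A sketch of the same short-lived pattern:

    import subprocess

    # Image digest copied from the log; 'true' stands in for the unshown command.
    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec')
    subprocess.run(['podman', 'run', '--rm', IMAGE, 'true'], check=True)
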
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.923 256155 DEBUG nova.compute.manager [req-58892195-2b62-4041-a414-c6eee0512e09 req-ca2a5178-5df2-4471-8504-8f350ea27e7e dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Received event network-vif-unplugged-00708a0f-61a2-499a-8116-e51af4ea857a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.925 256155 DEBUG oslo_concurrency.lockutils [req-58892195-2b62-4041-a414-c6eee0512e09 req-ca2a5178-5df2-4471-8504-8f350ea27e7e dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.926 256155 DEBUG oslo_concurrency.lockutils [req-58892195-2b62-4041-a414-c6eee0512e09 req-ca2a5178-5df2-4471-8504-8f350ea27e7e dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.926 256155 DEBUG oslo_concurrency.lockutils [req-58892195-2b62-4041-a414-c6eee0512e09 req-ca2a5178-5df2-4471-8504-8f350ea27e7e dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.927 256155 DEBUG nova.compute.manager [req-58892195-2b62-4041-a414-c6eee0512e09 req-ca2a5178-5df2-4471-8504-8f350ea27e7e dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] No waiting events found dispatching network-vif-unplugged-00708a0f-61a2-499a-8116-e51af4ea857a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:19:26 np0005540825 nova_compute[256151]: 2025-12-01 10:19:26.927 256155 DEBUG nova.compute.manager [req-58892195-2b62-4041-a414-c6eee0512e09 req-ca2a5178-5df2-4471-8504-8f350ea27e7e dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Received event network-vif-unplugged-00708a0f-61a2-499a-8116-e51af4ea857a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
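The acquire/release pair above is nova serializing external events per instance: each handler takes a named lock, "<instance-uuid>-events", before popping any waiter registered for that event. A minimal sketch of the pattern with the same oslo.concurrency helper (the lock name is copied from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('e1cf90f4-8776-435c-9045-5e998a50cf01-events')
    def _pop_event():
        # Nova would pop a matching waiter here; none was registered,
        # hence the "No waiting events found" line above.
        return None

    _pop_event()
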
Dec  1 05:19:26 np0005540825 podman[269523]: 2025-12-01 10:19:26.949471186 +0000 UTC m=+0.072772427 container create 2d567b49826c20ba1a3a013023509ffe209874795e5b959b0fa7ceeb8dcf6f26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_noether, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:19:27 np0005540825 systemd[1]: Started libpod-conmon-2d567b49826c20ba1a3a013023509ffe209874795e5b959b0fa7ceeb8dcf6f26.scope.
Dec  1 05:19:27 np0005540825 podman[269523]: 2025-12-01 10:19:26.91814788 +0000 UTC m=+0.041449171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:19:27 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:19:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b86fd22e22dca9ff3142a83e4939142f1d0f5c19887dd9d10e5d9ece9cfef0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b86fd22e22dca9ff3142a83e4939142f1d0f5c19887dd9d10e5d9ece9cfef0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b86fd22e22dca9ff3142a83e4939142f1d0f5c19887dd9d10e5d9ece9cfef0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b86fd22e22dca9ff3142a83e4939142f1d0f5c19887dd9d10e5d9ece9cfef0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b86fd22e22dca9ff3142a83e4939142f1d0f5c19887dd9d10e5d9ece9cfef0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
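The xfs warnings above fire once per bind mount into the new container rootfs, and the 0x7fffffff limit the kernel prints is simply the 32-bit time_t horizon:

    from datetime import datetime, timezone

    # Last second representable in a signed 32-bit time_t.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
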
Dec  1 05:19:27 np0005540825 podman[269523]: 2025-12-01 10:19:27.068834066 +0000 UTC m=+0.192135327 container init 2d567b49826c20ba1a3a013023509ffe209874795e5b959b0fa7ceeb8dcf6f26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:19:27 np0005540825 podman[269523]: 2025-12-01 10:19:27.08087422 +0000 UTC m=+0.204175471 container start 2d567b49826c20ba1a3a013023509ffe209874795e5b959b0fa7ceeb8dcf6f26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:19:27 np0005540825 podman[269523]: 2025-12-01 10:19:27.085689656 +0000 UTC m=+0.208990907 container attach 2d567b49826c20ba1a3a013023509ffe209874795e5b959b0fa7ceeb8dcf6f26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_noether, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:19:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:27.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
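Alertmanager above cannot deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2 port 8443; both POSTs time out with "context deadline exceeded". A throwaway stand-in receiver for connectivity testing only, assuming nothing about the real dashboard API beyond the path shown in the log:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StubReceiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Accept any POST (e.g. /api/prometheus_receiver) and answer 200,
            # just to prove the port is reachable from the alertmanager host.
            body = self.rfile.read(int(self.headers.get('Content-Length', 0)))
            print(self.path, len(body), 'bytes')
            self.send_response(200)
            self.end_headers()

    HTTPServer(('0.0.0.0', 8443), StubReceiver).serve_forever()
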
Dec  1 05:19:27 np0005540825 nova_compute[256151]: 2025-12-01 10:19:27.445 256155 INFO nova.virt.libvirt.driver [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Deleting instance files /var/lib/nova/instances/e1cf90f4-8776-435c-9045-5e998a50cf01_del#033[00m
Dec  1 05:19:27 np0005540825 nova_compute[256151]: 2025-12-01 10:19:27.446 256155 INFO nova.virt.libvirt.driver [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Deletion of /var/lib/nova/instances/e1cf90f4-8776-435c-9045-5e998a50cf01_del complete#033[00m
Dec  1 05:19:27 np0005540825 nova_compute[256151]: 2025-12-01 10:19:27.519 256155 INFO nova.compute.manager [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Took 1.75 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 05:19:27 np0005540825 nova_compute[256151]: 2025-12-01 10:19:27.520 256155 DEBUG oslo.service.loopingcall [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 05:19:27 np0005540825 nova_compute[256151]: 2025-12-01 10:19:27.521 256155 DEBUG nova.compute.manager [-] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 05:19:27 np0005540825 nova_compute[256151]: 2025-12-01 10:19:27.522 256155 DEBUG nova.network.neutron [-] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 05:19:27 np0005540825 ecstatic_noether[269539]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:19:27 np0005540825 ecstatic_noether[269539]: --> All data devices are unavailable
Dec  1 05:19:27 np0005540825 systemd[1]: libpod-2d567b49826c20ba1a3a013023509ffe209874795e5b959b0fa7ceeb8dcf6f26.scope: Deactivated successfully.
Dec  1 05:19:27 np0005540825 podman[269523]: 2025-12-01 10:19:27.597752759 +0000 UTC m=+0.721054040 container died 2d567b49826c20ba1a3a013023509ffe209874795e5b959b0fa7ceeb8dcf6f26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_noether, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:19:27 np0005540825 systemd[1]: var-lib-containers-storage-overlay-03b86fd22e22dca9ff3142a83e4939142f1d0f5c19887dd9d10e5d9ece9cfef0-merged.mount: Deactivated successfully.
Dec  1 05:19:27 np0005540825 podman[269523]: 2025-12-01 10:19:27.657747582 +0000 UTC m=+0.781048793 container remove 2d567b49826c20ba1a3a013023509ffe209874795e5b959b0fa7ceeb8dcf6f26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 05:19:27 np0005540825 systemd[1]: libpod-conmon-2d567b49826c20ba1a3a013023509ffe209874795e5b959b0fa7ceeb8dcf6f26.scope: Deactivated successfully.
Dec  1 05:19:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  1 05:19:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:27.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:28.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
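The beast access lines repeat in pairs from 192.168.122.100 and .102, always anonymous "HEAD /" with status 200: the shape of load-balancer health checks, though the log itself does not identify the clients. A sketch of pulling the useful fields out of one such line:

    import re

    line = ('beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous '
            '[01/Dec/2025:10:19:28.132 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000026s')
    m = re.search(r'(\d+\.\d+\.\d+\.\d+) - (\S+) \[(.*?)\] "(.*?)" '
                  r'(\d+) (\d+).*latency=([\d.]+)s', line)
    ip, user, ts, request, status, size, latency = m.groups()
    print(ip, request, status, float(latency))
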
Dec  1 05:19:28 np0005540825 podman[269661]: 2025-12-01 10:19:28.390737022 +0000 UTC m=+0.053659599 container create 22c8f0888fc5b745b1f3e069eb5ddf0803575d96eb7ccc0be1d6af247c3207f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_euclid, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 05:19:28 np0005540825 systemd[1]: Started libpod-conmon-22c8f0888fc5b745b1f3e069eb5ddf0803575d96eb7ccc0be1d6af247c3207f2.scope.
Dec  1 05:19:28 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:19:28 np0005540825 podman[269661]: 2025-12-01 10:19:28.364860038 +0000 UTC m=+0.027782625 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:19:28 np0005540825 podman[269661]: 2025-12-01 10:19:28.477353199 +0000 UTC m=+0.140275806 container init 22c8f0888fc5b745b1f3e069eb5ddf0803575d96eb7ccc0be1d6af247c3207f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:19:28 np0005540825 podman[269661]: 2025-12-01 10:19:28.487933165 +0000 UTC m=+0.150855742 container start 22c8f0888fc5b745b1f3e069eb5ddf0803575d96eb7ccc0be1d6af247c3207f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 05:19:28 np0005540825 podman[269661]: 2025-12-01 10:19:28.491489077 +0000 UTC m=+0.154411664 container attach 22c8f0888fc5b745b1f3e069eb5ddf0803575d96eb7ccc0be1d6af247c3207f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_euclid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  1 05:19:28 np0005540825 practical_euclid[269677]: 167 167
Dec  1 05:19:28 np0005540825 systemd[1]: libpod-22c8f0888fc5b745b1f3e069eb5ddf0803575d96eb7ccc0be1d6af247c3207f2.scope: Deactivated successfully.
Dec  1 05:19:28 np0005540825 conmon[269677]: conmon 22c8f0888fc5b745b1f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22c8f0888fc5b745b1f3e069eb5ddf0803575d96eb7ccc0be1d6af247c3207f2.scope/container/memory.events
Dec  1 05:19:28 np0005540825 podman[269661]: 2025-12-01 10:19:28.497153925 +0000 UTC m=+0.160076512 container died 22c8f0888fc5b745b1f3e069eb5ddf0803575d96eb7ccc0be1d6af247c3207f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:19:28 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9b775c829a2fa38849620c8579fdf20f0b91a32762e77ddc2bc045739643ec89-merged.mount: Deactivated successfully.
Dec  1 05:19:28 np0005540825 podman[269661]: 2025-12-01 10:19:28.549247113 +0000 UTC m=+0.212169690 container remove 22c8f0888fc5b745b1f3e069eb5ddf0803575d96eb7ccc0be1d6af247c3207f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_euclid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 05:19:28 np0005540825 systemd[1]: libpod-conmon-22c8f0888fc5b745b1f3e069eb5ddf0803575d96eb7ccc0be1d6af247c3207f2.scope: Deactivated successfully.
Dec  1 05:19:28 np0005540825 nova_compute[256151]: 2025-12-01 10:19:28.760 256155 DEBUG nova.network.neutron [-] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:19:28 np0005540825 podman[269700]: 2025-12-01 10:19:28.7690639 +0000 UTC m=+0.063503215 container create f1891de11916cf17d56778bd04ad25bcca2e4c6299004eac158e7b8e84d3ce5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_agnesi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:19:28 np0005540825 nova_compute[256151]: 2025-12-01 10:19:28.787 256155 INFO nova.compute.manager [-] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Took 1.27 seconds to deallocate network for instance.#033[00m
Dec  1 05:19:28 np0005540825 systemd[1]: Started libpod-conmon-f1891de11916cf17d56778bd04ad25bcca2e4c6299004eac158e7b8e84d3ce5b.scope.
Dec  1 05:19:28 np0005540825 nova_compute[256151]: 2025-12-01 10:19:28.829 256155 DEBUG oslo_concurrency.lockutils [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:19:28 np0005540825 nova_compute[256151]: 2025-12-01 10:19:28.830 256155 DEBUG oslo_concurrency.lockutils [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:19:28 np0005540825 podman[269700]: 2025-12-01 10:19:28.744193542 +0000 UTC m=+0.038632947 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:19:28 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:19:28 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06f51ff72b4987f4591d6e11e8fdf6a538c4e7ba1c258d81d401d8cdc5e7568d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:28 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06f51ff72b4987f4591d6e11e8fdf6a538c4e7ba1c258d81d401d8cdc5e7568d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:28 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06f51ff72b4987f4591d6e11e8fdf6a538c4e7ba1c258d81d401d8cdc5e7568d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:28 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06f51ff72b4987f4591d6e11e8fdf6a538c4e7ba1c258d81d401d8cdc5e7568d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:28 np0005540825 podman[269700]: 2025-12-01 10:19:28.87880136 +0000 UTC m=+0.173240765 container init f1891de11916cf17d56778bd04ad25bcca2e4c6299004eac158e7b8e84d3ce5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_agnesi, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Dec  1 05:19:28 np0005540825 podman[269700]: 2025-12-01 10:19:28.894453008 +0000 UTC m=+0.188892333 container start f1891de11916cf17d56778bd04ad25bcca2e4c6299004eac158e7b8e84d3ce5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_agnesi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:19:28 np0005540825 nova_compute[256151]: 2025-12-01 10:19:28.897 256155 DEBUG oslo_concurrency.processutils [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:19:28 np0005540825 podman[269700]: 2025-12-01 10:19:28.898738889 +0000 UTC m=+0.193178284 container attach f1891de11916cf17d56778bd04ad25bcca2e4c6299004eac158e7b8e84d3ce5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_agnesi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  1 05:19:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
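ganesha.nfsd above enters a 90-second grace window. When it should lift, computed from the log's own timestamp (the ganesha lines use day/month/year, UTC):

    from datetime import datetime, timedelta

    start = datetime.strptime('01/12/2025 10:19:28', '%d/%m/%Y %H:%M:%S')
    print(start + timedelta(seconds=90))  # -> 2025-12-01 10:20:58
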
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.024 256155 DEBUG nova.compute.manager [req-aa23a2d2-7b4c-4eb4-8c90-cd0457f7dac7 req-c697928e-134e-4778-ab96-4eb408b9b15f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Received event network-vif-plugged-00708a0f-61a2-499a-8116-e51af4ea857a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.025 256155 DEBUG oslo_concurrency.lockutils [req-aa23a2d2-7b4c-4eb4-8c90-cd0457f7dac7 req-c697928e-134e-4778-ab96-4eb408b9b15f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.025 256155 DEBUG oslo_concurrency.lockutils [req-aa23a2d2-7b4c-4eb4-8c90-cd0457f7dac7 req-c697928e-134e-4778-ab96-4eb408b9b15f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.026 256155 DEBUG oslo_concurrency.lockutils [req-aa23a2d2-7b4c-4eb4-8c90-cd0457f7dac7 req-c697928e-134e-4778-ab96-4eb408b9b15f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.026 256155 DEBUG nova.compute.manager [req-aa23a2d2-7b4c-4eb4-8c90-cd0457f7dac7 req-c697928e-134e-4778-ab96-4eb408b9b15f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] No waiting events found dispatching network-vif-plugged-00708a0f-61a2-499a-8116-e51af4ea857a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.027 256155 WARNING nova.compute.manager [req-aa23a2d2-7b4c-4eb4-8c90-cd0457f7dac7 req-c697928e-134e-4778-ab96-4eb408b9b15f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Received unexpected event network-vif-plugged-00708a0f-61a2-499a-8116-e51af4ea857a for instance with vm_state deleted and task_state None.#033[00m
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.027 256155 DEBUG nova.compute.manager [req-aa23a2d2-7b4c-4eb4-8c90-cd0457f7dac7 req-c697928e-134e-4778-ab96-4eb408b9b15f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Received event network-vif-deleted-00708a0f-61a2-499a-8116-e51af4ea857a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
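All three events above (unplugged, then a late plugged against the already-deleted instance, then deleted) carry the same neutron port. The tag format is "<event-name>-<port-uuid>", so the port ID is recoverable as the last 36 characters:

    events = [
        'network-vif-unplugged-00708a0f-61a2-499a-8116-e51af4ea857a',
        'network-vif-plugged-00708a0f-61a2-499a-8116-e51af4ea857a',
        'network-vif-deleted-00708a0f-61a2-499a-8116-e51af4ea857a',
    ]
    for tag in events:
        name, port_id = tag[:-37], tag[-36:]  # strip '-<uuid>' (36 chars plus hyphen)
        print(name, port_id)
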
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]: {
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:    "1": [
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:        {
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            "devices": [
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "/dev/loop3"
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            ],
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            "lv_name": "ceph_lv0",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            "lv_size": "21470642176",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            "name": "ceph_lv0",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            "tags": {
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.cluster_name": "ceph",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.crush_device_class": "",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.encrypted": "0",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.osd_id": "1",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.type": "block",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.vdo": "0",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:                "ceph.with_tpm": "0"
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            },
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            "type": "block",
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:            "vg_name": "ceph_vg0"
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:        }
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]:    ]
Dec  1 05:19:29 np0005540825 mystifying_agnesi[269717]: }
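Reassembled, the mystifying_agnesi output above is a ceph-volume-style JSON report of LVM-backed OSDs, keyed by OSD id. A sketch of reading the fields an operator usually wants; the literal below is a trimmed copy of the logged JSON:

    import json

    raw_json = '''
    {"1": [{"devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "tags": {"ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
                     "ceph.osd_id": "1"}}]}
    '''
    for osd_id, lvs in json.loads(raw_json).items():
        for lv in lvs:
            print(osd_id, lv['lv_path'], lv['devices'], lv['tags']['ceph.osd_fsid'])
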
Dec  1 05:19:29 np0005540825 systemd[1]: libpod-f1891de11916cf17d56778bd04ad25bcca2e4c6299004eac158e7b8e84d3ce5b.scope: Deactivated successfully.
Dec  1 05:19:29 np0005540825 conmon[269717]: conmon f1891de11916cf17d567 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1891de11916cf17d56778bd04ad25bcca2e4c6299004eac158e7b8e84d3ce5b.scope/container/memory.events
Dec  1 05:19:29 np0005540825 podman[269700]: 2025-12-01 10:19:29.252121688 +0000 UTC m=+0.546561013 container died f1891de11916cf17d56778bd04ad25bcca2e4c6299004eac158e7b8e84d3ce5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:19:29 np0005540825 systemd[1]: var-lib-containers-storage-overlay-06f51ff72b4987f4591d6e11e8fdf6a538c4e7ba1c258d81d401d8cdc5e7568d-merged.mount: Deactivated successfully.
Dec  1 05:19:29 np0005540825 podman[269700]: 2025-12-01 10:19:29.30207645 +0000 UTC m=+0.596515775 container remove f1891de11916cf17d56778bd04ad25bcca2e4c6299004eac158e7b8e84d3ce5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 05:19:29 np0005540825 systemd[1]: libpod-conmon-f1891de11916cf17d56778bd04ad25bcca2e4c6299004eac158e7b8e84d3ce5b.scope: Deactivated successfully.
Dec  1 05:19:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:19:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2475863935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.433 256155 DEBUG oslo_concurrency.processutils [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.443 256155 DEBUG nova.compute.provider_tree [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.460 256155 DEBUG nova.scheduler.client.report [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.480 256155 DEBUG oslo_concurrency.lockutils [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.509 256155 INFO nova.scheduler.client.report [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Deleted allocations for instance e1cf90f4-8776-435c-9045-5e998a50cf01#033[00m
Dec  1 05:19:29 np0005540825 nova_compute[256151]: 2025-12-01 10:19:29.584 256155 DEBUG oslo_concurrency.lockutils [None req-65fc2036-d821-4f48-b11d-c91929576125 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "e1cf90f4-8776-435c-9045-5e998a50cf01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
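The inventory dict a few lines up decomposes into schedulable capacity via the usual placement formula, capacity = (total - reserved) * allocation_ratio; a quick check with the logged values:

    inv = {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2
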
Dec  1 05:19:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:19:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:30 np0005540825 podman[269852]: 2025-12-01 10:19:30.032586424 +0000 UTC m=+0.078981459 container create 8e19b7768885d3976f0bfd154ebd54b3d43c0b8f74ebdf3210f000d0243571fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 05:19:30 np0005540825 systemd[1]: Started libpod-conmon-8e19b7768885d3976f0bfd154ebd54b3d43c0b8f74ebdf3210f000d0243571fa.scope.
Dec  1 05:19:30 np0005540825 podman[269852]: 2025-12-01 10:19:30.003067945 +0000 UTC m=+0.049463030 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:19:30 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:19:30 np0005540825 podman[269852]: 2025-12-01 10:19:30.129942431 +0000 UTC m=+0.176337466 container init 8e19b7768885d3976f0bfd154ebd54b3d43c0b8f74ebdf3210f000d0243571fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec  1 05:19:30 np0005540825 podman[269852]: 2025-12-01 10:19:30.136162373 +0000 UTC m=+0.182557388 container start 8e19b7768885d3976f0bfd154ebd54b3d43c0b8f74ebdf3210f000d0243571fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:19:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:30.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:30 np0005540825 podman[269852]: 2025-12-01 10:19:30.139596932 +0000 UTC m=+0.185992017 container attach 8e19b7768885d3976f0bfd154ebd54b3d43c0b8f74ebdf3210f000d0243571fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Dec  1 05:19:30 np0005540825 recursing_ishizaka[269868]: 167 167
Dec  1 05:19:30 np0005540825 systemd[1]: libpod-8e19b7768885d3976f0bfd154ebd54b3d43c0b8f74ebdf3210f000d0243571fa.scope: Deactivated successfully.
Dec  1 05:19:30 np0005540825 podman[269852]: 2025-12-01 10:19:30.142799636 +0000 UTC m=+0.189194621 container died 8e19b7768885d3976f0bfd154ebd54b3d43c0b8f74ebdf3210f000d0243571fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ishizaka, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:19:30 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7e771db8845677236bda61c56c46ba82efa60d48d23ad28f30d4b9d2ba43eea3-merged.mount: Deactivated successfully.
Dec  1 05:19:30 np0005540825 podman[269852]: 2025-12-01 10:19:30.184043341 +0000 UTC m=+0.230438376 container remove 8e19b7768885d3976f0bfd154ebd54b3d43c0b8f74ebdf3210f000d0243571fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec  1 05:19:30 np0005540825 systemd[1]: libpod-conmon-8e19b7768885d3976f0bfd154ebd54b3d43c0b8f74ebdf3210f000d0243571fa.scope: Deactivated successfully.
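The recursing_ishizaka container above is a short-lived cephadm helper: create, init, start, attach, one line of output ("167 167", likely a probe of the ceph uid/gid), died, and remove, all within roughly 150 ms. One way to watch this churn live is podman's JSON event stream; a diagnostic sketch, not part of this deployment (event field names ID/Name/Status/Type match podman 4.x JSON output and may differ on other versions):

    import json
    import subprocess

    # Stream podman's event log as JSON and pair container create/remove by ID.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    started = {}
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") != "container":
            continue
        cid, status = ev.get("ID", ""), ev.get("Status")
        if status == "create":
            started[cid] = ev.get("Name")
        elif status == "remove" and cid in started:
            print(f"short-lived container {started.pop(cid)} ({cid[:12]}) came and went")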
Dec  1 05:19:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:30 np0005540825 podman[269891]: 2025-12-01 10:19:30.42079677 +0000 UTC m=+0.071249148 container create 090039cee91469edee9df2add52d65aa8641ed342208b1b8bab27c0e74d481a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cori, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 05:19:30 np0005540825 systemd[1]: Started libpod-conmon-090039cee91469edee9df2add52d65aa8641ed342208b1b8bab27c0e74d481a2.scope.
Dec  1 05:19:30 np0005540825 podman[269891]: 2025-12-01 10:19:30.393989961 +0000 UTC m=+0.044442419 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:19:30 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:19:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b473cfca56c1f74f3b40515448246fd5b53f3fcfbec061d7d852bb24307e9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b473cfca56c1f74f3b40515448246fd5b53f3fcfbec061d7d852bb24307e9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b473cfca56c1f74f3b40515448246fd5b53f3fcfbec061d7d852bb24307e9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:19:30 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b473cfca56c1f74f3b40515448246fd5b53f3fcfbec061d7d852bb24307e9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
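The four xfs warnings fire as podman bind-mounts host paths into the new container; "supports timestamps until 2038 (0x7fffffff)" is the 32-bit time_t ceiling on this filesystem's inode timestamps. Decoding the constant:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit epoch second.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00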
Dec  1 05:19:30 np0005540825 podman[269891]: 2025-12-01 10:19:30.528411494 +0000 UTC m=+0.178863942 container init 090039cee91469edee9df2add52d65aa8641ed342208b1b8bab27c0e74d481a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:19:30 np0005540825 podman[269891]: 2025-12-01 10:19:30.546559207 +0000 UTC m=+0.197011615 container start 090039cee91469edee9df2add52d65aa8641ed342208b1b8bab27c0e74d481a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cori, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  1 05:19:30 np0005540825 podman[269891]: 2025-12-01 10:19:30.551472305 +0000 UTC m=+0.201924703 container attach 090039cee91469edee9df2add52d65aa8641ed342208b1b8bab27c0e74d481a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:19:30 np0005540825 podman[269905]: 2025-12-01 10:19:30.579418933 +0000 UTC m=+0.110603873 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
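The multipathd health_status=healthy event is podman's periodic healthcheck timer executing the test command from config_data ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/multipathd). The same check can be driven by hand when debugging a failing streak; a sketch using the container name from the log:

    import subprocess

    # Run the configured healthcheck once; `podman healthcheck run` exits 0 on success.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "multipathd"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")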
Dec  1 05:19:30 np0005540825 nova_compute[256151]: 2025-12-01 10:19:30.753 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:31 np0005540825 nova_compute[256151]: 2025-12-01 10:19:31.266 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:31 np0005540825 lvm[270001]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:19:31 np0005540825 lvm[270001]: VG ceph_vg0 finished
Dec  1 05:19:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:31] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec  1 05:19:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:31] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec  1 05:19:31 np0005540825 zealous_cori[269908]: {}
Dec  1 05:19:31 np0005540825 lvm[270004]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:19:31 np0005540825 lvm[270004]: VG ceph_vg0 finished
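The duplicated lvm messages are udev-triggered pvscan runs (one per event, pids 270001 and 270004) reporting that /dev/loop3 brings VG ceph_vg0 to completeness, i.e. all of its PVs are online. The PV-to-VG mapping can be confirmed on demand via lvm2's JSON reporting; a sketch assuming the --reportformat json option is available:

    import json
    import subprocess

    # Ask lvm for PVs in JSON and print which VG each one backs.
    out = subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in json.loads(out)["report"][0]["pv"]:
        print(row["vg_name"], "<-", row["pv_name"])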
Dec  1 05:19:31 np0005540825 systemd[1]: libpod-090039cee91469edee9df2add52d65aa8641ed342208b1b8bab27c0e74d481a2.scope: Deactivated successfully.
Dec  1 05:19:31 np0005540825 podman[269891]: 2025-12-01 10:19:31.421113736 +0000 UTC m=+1.071566144 container died 090039cee91469edee9df2add52d65aa8641ed342208b1b8bab27c0e74d481a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cori, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:19:31 np0005540825 systemd[1]: libpod-090039cee91469edee9df2add52d65aa8641ed342208b1b8bab27c0e74d481a2.scope: Consumed 1.572s CPU time.
Dec  1 05:19:31 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e8b473cfca56c1f74f3b40515448246fd5b53f3fcfbec061d7d852bb24307e9c-merged.mount: Deactivated successfully.
Dec  1 05:19:31 np0005540825 podman[269891]: 2025-12-01 10:19:31.470352579 +0000 UTC m=+1.120804957 container remove 090039cee91469edee9df2add52d65aa8641ed342208b1b8bab27c0e74d481a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 05:19:31 np0005540825 systemd[1]: libpod-conmon-090039cee91469edee9df2add52d65aa8641ed342208b1b8bab27c0e74d481a2.scope: Deactivated successfully.
Dec  1 05:19:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:19:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:19:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec  1 05:19:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:31.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:32.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:32 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:19:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:33.662Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:19:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:33.663Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
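Alertmanager repeatedly fails to POST to the ceph-dashboard receivers on compute-1/compute-2 port 8443 (first an i/o timeout, then context deadline exceeded after retries), so these notifications are being dropped. A quick way to test the delivery path is to stand in a trivial receiver and point one webhook at it; a diagnostic sketch (port 8443 and plain HTTP are taken from the logged URL; a real dashboard endpoint may require TLS):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Log whatever Alertmanager delivers and acknowledge it.
            length = int(self.headers.get("Content-Length", 0))
            print(self.path, self.rfile.read(length)[:200])
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()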
Dec  1 05:19:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec  1 05:19:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:33.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
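ganesha enters its 90-second NFS grace period, finds no clients to reclaim (clid count(0)), and rados_cluster_grace_enforcing returns ret=-45, a negated errno. Mapping the number to a name on the host that wrote the log (errno numbering is platform-specific, so the name is whatever this kernel assigns to 45):

    import errno
    import os

    code = 45  # from "rados_cluster_grace_enforcing: ret=-45"
    print(errno.errorcode.get(code, "unknown"), "-", os.strerror(code))
    # On x86_64 Linux this prints: EL2NSYNC - Level 2 not synchronized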
Dec  1 05:19:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:34.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:35 np0005540825 nova_compute[256151]: 2025-12-01 10:19:35.795 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 66 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 55 op/s
Dec  1 05:19:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:35.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:36.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:36 np0005540825 nova_compute[256151]: 2025-12-01 10:19:36.270 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:37.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.393497) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584377393786, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2121, "num_deletes": 251, "total_data_size": 4255985, "memory_usage": 4339984, "flush_reason": "Manual Compaction"}
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584377423671, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4091309, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24883, "largest_seqno": 27003, "table_properties": {"data_size": 4081955, "index_size": 5848, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19577, "raw_average_key_size": 20, "raw_value_size": 4063125, "raw_average_value_size": 4206, "num_data_blocks": 256, "num_entries": 966, "num_filter_entries": 966, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764584175, "oldest_key_time": 1764584175, "file_creation_time": 1764584377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 30244 microseconds, and 14858 cpu microseconds.
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.423740) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4091309 bytes OK
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.423770) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.425672) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.425688) EVENT_LOG_v1 {"time_micros": 1764584377425683, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.425715) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4247391, prev total WAL file size 4247391, number of live WAL files 2.
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.426951) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3995KB)], [56(12MB)]
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584377426999, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 17352100, "oldest_snapshot_seqno": -1}
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5878 keys, 15145273 bytes, temperature: kUnknown
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584377515431, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 15145273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15104876, "index_size": 24607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 149254, "raw_average_key_size": 25, "raw_value_size": 14997504, "raw_average_value_size": 2551, "num_data_blocks": 1006, "num_entries": 5878, "num_filter_entries": 5878, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764584377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.515793) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 15145273 bytes
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.517555) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.0 rd, 171.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.6 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 6398, records dropped: 520 output_compression: NoCompression
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.517592) EVENT_LOG_v1 {"time_micros": 1764584377517572, "job": 30, "event": "compaction_finished", "compaction_time_micros": 88538, "compaction_time_cpu_micros": 33969, "output_level": 6, "num_output_files": 1, "total_output_size": 15145273, "num_input_records": 6398, "num_output_records": 5878, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584377519170, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584377523675, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.426842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.523787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.523794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.523796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.523799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:19:37 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:19:37.523802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
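The JOB 30 summary above contains enough to recheck rocksdb's amplification arithmetic: 3.9 MB read from L0, 12.6 MB read from L6, 14.4 MB written back to L6. Reproducing the logged write-amplify(3.7) and read-write-amplify(7.9):

    # Figures from the "compacted to:" line for JOB 30 (MB).
    in_l0, in_l6, out_l6 = 3.9, 12.6, 14.4

    write_amplify = out_l6 / in_l0                          # bytes written per byte of new L0 data
    read_write_amplify = (in_l0 + in_l6 + out_l6) / in_l0   # total I/O per byte of new L0 data

    print(f"write-amplify({write_amplify:.1f}) read-write-amplify({read_write_amplify:.1f})")
    # write-amplify(3.7) read-write-amplify(7.9)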
Dec  1 05:19:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Dec  1 05:19:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:37.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:38 np0005540825 nova_compute[256151]: 2025-12-01 10:19:38.119 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:38.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:38 np0005540825 nova_compute[256151]: 2025-12-01 10:19:38.234 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:19:39
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.rgw.root', '.nfs', 'default.rgw.meta', 'vms', '.mgr', 'backups', 'volumes', 'default.rgw.log']
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
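This balancer pass ran in upmap mode under a 5% max-misplaced ceiling and prepared 0 of its allowed 10 upmap changes, meaning the PG distribution is already even. The module's state can be queried directly; a sketch assuming the ceph CLI and an admin keyring are available on this node (key names follow the JSON that `ceph balancer status` emits):

    import json
    import subprocess

    # `ceph balancer status` reports mode, active flag, and the last optimize result.
    status = json.loads(
        subprocess.run(
            ["ceph", "balancer", "status", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
    )
    print(status.get("mode"), status.get("active"), status.get("optimize_result"))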
Dec  1 05:19:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:19:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:19:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:39.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
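Each pg_autoscaler line is plain arithmetic over the logged inputs: pg target = space ratio x bias x PG budget, then quantized to a power of two and only applied when far enough from the current value. The '.mgr' and 'cephfs.cephfs.meta' targets above reproduce exactly under an assumed budget of 3 OSDs x mon_target_pg_per_osd=100 = 300 (both factors inferred from the numbers, not read from this cluster's config):

    # Assumed cluster shape: 3 OSDs x mon_target_pg_per_osd=100 -> 300 PG budget.
    PG_BUDGET = 3 * 100

    def pg_target(space_ratio, bias):
        return space_ratio * bias * PG_BUDGET

    # Ratios copied from the pg_autoscaler lines above.
    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337 -> quantized to 1
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 -> quantized to 16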
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:19:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:19:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:40.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:40 np0005540825 nova_compute[256151]: 2025-12-01 10:19:40.830 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:41 np0005540825 nova_compute[256151]: 2025-12-01 10:19:41.207 256155 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764584366.2065425, e1cf90f4-8776-435c-9045-5e998a50cf01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 05:19:41 np0005540825 nova_compute[256151]: 2025-12-01 10:19:41.208 256155 INFO nova.compute.manager [-] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] VM Stopped (Lifecycle Event)
Dec  1 05:19:41 np0005540825 nova_compute[256151]: 2025-12-01 10:19:41.272 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:41 np0005540825 podman[270055]: 2025-12-01 10:19:41.296215845 +0000 UTC m=+0.151829317 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 05:19:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:41] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec  1 05:19:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:41] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec  1 05:19:41 np0005540825 nova_compute[256151]: 2025-12-01 10:19:41.801 256155 DEBUG nova.compute.manager [None req-8185f464-1708-4566-9840-454ddcdb0a94 - - - - - -] [instance: e1cf90f4-8776-435c-9045-5e998a50cf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 05:19:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Dec  1 05:19:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:41.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:19:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:42.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:19:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:43.664Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:19:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:19:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:43.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:19:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:44.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:45 np0005540825 nova_compute[256151]: 2025-12-01 10:19:45.831 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:19:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:45.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:46 np0005540825 nova_compute[256151]: 2025-12-01 10:19:46.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:46 np0005540825 nova_compute[256151]: 2025-12-01 10:19:46.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  1 05:19:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:46.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:46 np0005540825 nova_compute[256151]: 2025-12-01 10:19:46.275 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:47.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:19:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Dec  1 05:19:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:47.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:48.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:19:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:19:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:49.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:50 np0005540825 nova_compute[256151]: 2025-12-01 10:19:50.129 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:50 np0005540825 nova_compute[256151]: 2025-12-01 10:19:50.129 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:19:50 np0005540825 nova_compute[256151]: 2025-12-01 10:19:50.130 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:19:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:50.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:50 np0005540825 nova_compute[256151]: 2025-12-01 10:19:50.166 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:19:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:50 np0005540825 nova_compute[256151]: 2025-12-01 10:19:50.835 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:51 np0005540825 nova_compute[256151]: 2025-12-01 10:19:51.060 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:51 np0005540825 nova_compute[256151]: 2025-12-01 10:19:51.061 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:51 np0005540825 nova_compute[256151]: 2025-12-01 10:19:51.277 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:51] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec  1 05:19:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:19:51] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Dec  1 05:19:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:19:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:51.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:52 np0005540825 nova_compute[256151]: 2025-12-01 10:19:52.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:52.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:53 np0005540825 nova_compute[256151]: 2025-12-01 10:19:53.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:53.666Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:19:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:19:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:53.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:54.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
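The ganesha burst above repeats every ~5 s throughout this section: the server re-enters a 90 s grace period, reloads client info from the RADOS recovery backend, finds nothing to reclaim (reclaim complete(0), clid count(0)), and rados_cluster_grace_enforcing keeps returning -45 (a negative errno from the clustered recovery backend; the log itself does not say which error that maps to). A small stdin watcher that flags this re-entry loop (the threshold is invented for the example):

    import re
    import sys

    IN_GRACE = re.compile(r'NFS Server Now IN GRACE, duration (\d+)')

    entries = 0
    for line in sys.stdin:  # e.g. piped from journalctl -f
        if IN_GRACE.search(line):
            entries += 1
            if entries >= 3:  # grace restarted repeatedly; it may never lift
                print('ganesha keeps re-entering grace', file=sys.stderr)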
Dec  1 05:19:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:19:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:19:55 np0005540825 nova_compute[256151]: 2025-12-01 10:19:55.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:55 np0005540825 nova_compute[256151]: 2025-12-01 10:19:55.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:19:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:19:55 np0005540825 nova_compute[256151]: 2025-12-01 10:19:55.882 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:19:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:55.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:19:56 np0005540825 nova_compute[256151]: 2025-12-01 10:19:56.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:56 np0005540825 nova_compute[256151]: 2025-12-01 10:19:56.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:19:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:56.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:56 np0005540825 podman[270122]: 2025-12-01 10:19:56.227459914 +0000 UTC m=+0.085270093 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
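The podman record above is a periodic health_status event for ovn_metadata_agent: per its config_data, the check bind-mounts /var/lib/openstack/healthchecks/ovn_metadata_agent into the container and runs /openstack/healthcheck. The same probe can be run on demand (sketch; assumes the podman CLI on this host and the container name from the log):

    import subprocess

    # 'podman healthcheck run' exits 0 when the container's configured check passes.
    r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'])
    print('healthy' if r.returncode == 0 else 'unhealthy')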
Dec  1 05:19:56 np0005540825 nova_compute[256151]: 2025-12-01 10:19:56.279 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:19:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:19:57.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:19:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:19:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:19:57.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:58 np0005540825 nova_compute[256151]: 2025-12-01 10:19:58.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:58 np0005540825 nova_compute[256151]: 2025-12-01 10:19:58.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  1 05:19:58 np0005540825 nova_compute[256151]: 2025-12-01 10:19:58.052 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  1 05:19:58 np0005540825 nova_compute[256151]: 2025-12-01 10:19:58.052 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:19:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:19:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:19:58.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:19:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:19:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:19:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:19:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:19:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.082 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.112 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.113 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.113 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.113 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.114 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:19:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:19:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3861310351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.559 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
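The resource audit shells out to ceph df, which shows up on the mon as the client.openstack df dispatch above. A sketch of deriving the free-space figure nova logs just below, assuming the usual ceph df JSON layout with a top-level "stats" object:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    free_gb = stats['total_avail_bytes'] / 1024 ** 3  # ~59.99 in the view below
    print(round(free_gb, 2))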
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.817 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.819 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4592MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.819 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.820 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:19:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.914 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.915 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:19:59 np0005540825 nova_compute[256151]: 2025-12-01 10:19:59.970 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:20:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Dec  1 05:20:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Dec  1 05:20:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] :      osd.2 observed slow operation indications in BlueStore
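The HEALTH_WARN block above is the BLUESTORE_SLOW_OP_ALERT check firing for osd.2. The same detail can be pulled programmatically (sketch; assumes the ceph CLI, a client with mon read caps, and the usual health-detail JSON shape):

    import json
    import subprocess

    health = json.loads(subprocess.check_output(
        ['ceph', 'health', 'detail', '--format=json']))
    for name, check in health.get('checks', {}).items():
        print(name, '-', check['summary']['message'])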
Dec  1 05:20:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:00.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:20:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3098762632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:20:00 np0005540825 nova_compute[256151]: 2025-12-01 10:20:00.479 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:20:00 np0005540825 nova_compute[256151]: 2025-12-01 10:20:00.488 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:20:00 np0005540825 nova_compute[256151]: 2025-12-01 10:20:00.504 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:20:00 np0005540825 nova_compute[256151]: 2025-12-01 10:20:00.528 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:20:00 np0005540825 nova_compute[256151]: 2025-12-01 10:20:00.529 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
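The inventory record reported above is what bounds scheduling on this node: placement computes per-class capacity as (total - reserved) * allocation_ratio. Worked out from the logged values:

    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2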
Dec  1 05:20:00 np0005540825 ceph-mon[74416]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Dec  1 05:20:00 np0005540825 ceph-mon[74416]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Dec  1 05:20:00 np0005540825 ceph-mon[74416]:     osd.2 observed slow operation indications in BlueStore
Dec  1 05:20:00 np0005540825 nova_compute[256151]: 2025-12-01 10:20:00.919 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:01 np0005540825 podman[270190]: 2025-12-01 10:20:01.23241087 +0000 UTC m=+0.092593644 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 05:20:01 np0005540825 nova_compute[256151]: 2025-12-01 10:20:01.281 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:01] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Dec  1 05:20:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:01] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Dec  1 05:20:01 np0005540825 nova_compute[256151]: 2025-12-01 10:20:01.475 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:20:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:20:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:02.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:02.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:03.667Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:20:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:20:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:04.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:04.577 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:20:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:04.578 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:20:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:04.578 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:20:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec  1 05:20:05 np0005540825 nova_compute[256151]: 2025-12-01 10:20:05.964 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:06.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:06.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:06 np0005540825 nova_compute[256151]: 2025-12-01 10:20:06.283 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:07 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:07.128 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 05:20:07 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:07.130 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 05:20:07 np0005540825 nova_compute[256151]: 2025-12-01 10:20:07.179 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:07.223Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:20:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:07.224Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
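The pair of alertmanager lines above narrows the recurring notify failures down: TCP connections to the ceph-dashboard webhook receivers on compute-1 and compute-2 port 8443 time out, so every retry is cancelled at its context deadline. A plain socket probe reproduces the symptom from this host (hostnames and port taken from the log):

    import socket

    for host in ('compute-1.ctlplane.example.com',
                 'compute-2.ctlplane.example.com'):
        try:
            socket.create_connection((host, 8443), timeout=5).close()
            print(host, 'reachable')
        except OSError as exc:
            print(host, 'unreachable:', exc)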
Dec  1 05:20:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  1 05:20:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:08.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:08 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:08.133 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
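The sequence above is the metadata agent acknowledging a southbound bump: SB_Global.nb_cfg moved from 7 to 8, the agent delayed one second, then wrote neutron:ovn-metadata-sb-cfg=8 into its Chassis_Private external_ids via DbSetCommand. The acknowledgement can be checked from the CLI (sketch; the record UUID comes from the log, ovn-sbctl must point at this southbound DB, and the colon-containing key may need quoting in some shells):

    import subprocess

    print(subprocess.check_output(
        ['ovn-sbctl', 'get', 'Chassis_Private',
         '4d9738cf-2abf-48e2-9303-677669784912',
         'external_ids:neutron:ovn-metadata-sb-cfg']).decode().strip())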
Dec  1 05:20:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:08.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:20:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:20:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:20:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:20:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:20:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:20:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:20:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:20:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  1 05:20:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:10.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:10.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:11 np0005540825 nova_compute[256151]: 2025-12-01 10:20:11.002 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:11 np0005540825 nova_compute[256151]: 2025-12-01 10:20:11.285 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:11] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:20:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:11] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:20:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  1 05:20:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:12.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:20:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:12.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:20:12 np0005540825 podman[270247]: 2025-12-01 10:20:12.317599611 +0000 UTC m=+0.174916679 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 05:20:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:13.668Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:20:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  1 05:20:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:20:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:14.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:20:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:14.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 104 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 86 op/s
Dec  1 05:20:16 np0005540825 nova_compute[256151]: 2025-12-01 10:20:16.004 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:16.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000052s ======
Dec  1 05:20:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:16.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec  1 05:20:16 np0005540825 nova_compute[256151]: 2025-12-01 10:20:16.287 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:17.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:20:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 448 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec  1 05:20:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:18.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:18.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:18 np0005540825 ovn_controller[153404]: 2025-12-01T10:20:18Z|00049|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  1 05:20:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 19 op/s
Dec  1 05:20:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:20.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:20.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:21 np0005540825 nova_compute[256151]: 2025-12-01 10:20:21.007 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:21 np0005540825 nova_compute[256151]: 2025-12-01 10:20:21.289 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:21] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:20:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:21] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:20:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  1 05:20:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  1 05:20:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:22.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  1 05:20:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:22.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:23.669Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:20:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:20:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:20:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:20:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:24.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:20:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:20:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  1 05:20:26 np0005540825 nova_compute[256151]: 2025-12-01 10:20:26.035 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:26.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:26 np0005540825 nova_compute[256151]: 2025-12-01 10:20:26.291 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:27.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:20:27 np0005540825 podman[270316]: 2025-12-01 10:20:27.234267902 +0000 UTC m=+0.087878810 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:20:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 338 KiB/s rd, 976 KiB/s wr, 53 op/s
Dec  1 05:20:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:28.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:28.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 398 KiB/s wr, 46 op/s
Dec  1 05:20:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:30.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:30.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:31 np0005540825 nova_compute[256151]: 2025-12-01 10:20:31.075 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:31 np0005540825 nova_compute[256151]: 2025-12-01 10:20:31.293 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:31] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec  1 05:20:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:31] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Dec  1 05:20:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 400 KiB/s wr, 46 op/s
Dec  1 05:20:32 np0005540825 podman[270365]: 2025-12-01 10:20:32.060405139 +0000 UTC m=+0.057703054 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 05:20:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:32.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:32.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:20:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:20:33 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:20:33 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:20:33 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:20:33 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:20:33 np0005540825 podman[270532]: 2025-12-01 10:20:33.542650343 +0000 UTC m=+0.076391281 container create f37a017ce30af1f7f76b9de5926cd34bdc298f1963533d692ec359d68a9d7517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_khayyam, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:20:33 np0005540825 podman[270532]: 2025-12-01 10:20:33.511388629 +0000 UTC m=+0.045129627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:20:33 np0005540825 systemd[1]: Started libpod-conmon-f37a017ce30af1f7f76b9de5926cd34bdc298f1963533d692ec359d68a9d7517.scope.
Dec  1 05:20:33 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:20:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:33.670Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:20:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:33.671Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:20:33 np0005540825 podman[270532]: 2025-12-01 10:20:33.686738298 +0000 UTC m=+0.220479236 container init f37a017ce30af1f7f76b9de5926cd34bdc298f1963533d692ec359d68a9d7517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:20:33 np0005540825 podman[270532]: 2025-12-01 10:20:33.696836741 +0000 UTC m=+0.230577669 container start f37a017ce30af1f7f76b9de5926cd34bdc298f1963533d692ec359d68a9d7517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_khayyam, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:20:33 np0005540825 podman[270532]: 2025-12-01 10:20:33.701420601 +0000 UTC m=+0.235161529 container attach f37a017ce30af1f7f76b9de5926cd34bdc298f1963533d692ec359d68a9d7517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:20:33 np0005540825 priceless_khayyam[270549]: 167 167
Dec  1 05:20:33 np0005540825 podman[270532]: 2025-12-01 10:20:33.704722907 +0000 UTC m=+0.238463845 container died f37a017ce30af1f7f76b9de5926cd34bdc298f1963533d692ec359d68a9d7517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_khayyam, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 05:20:33 np0005540825 systemd[1]: libpod-f37a017ce30af1f7f76b9de5926cd34bdc298f1963533d692ec359d68a9d7517.scope: Deactivated successfully.
Dec  1 05:20:33 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4b74a8828c018ffe7457c84ed5d66c696d35fec45c955a48461f667b54010247-merged.mount: Deactivated successfully.
Dec  1 05:20:33 np0005540825 podman[270532]: 2025-12-01 10:20:33.762543163 +0000 UTC m=+0.296284101 container remove f37a017ce30af1f7f76b9de5926cd34bdc298f1963533d692ec359d68a9d7517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 05:20:33 np0005540825 systemd[1]: libpod-conmon-f37a017ce30af1f7f76b9de5926cd34bdc298f1963533d692ec359d68a9d7517.scope: Deactivated successfully.
Dec  1 05:20:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 14 KiB/s wr, 1 op/s
Dec  1 05:20:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:34 np0005540825 podman[270574]: 2025-12-01 10:20:34.017881197 +0000 UTC m=+0.070319213 container create 7aec4373cbcf587f0dfd157f26942fa37005528612d81bacd25409c29a7b60dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:20:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:34.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:34 np0005540825 systemd[1]: Started libpod-conmon-7aec4373cbcf587f0dfd157f26942fa37005528612d81bacd25409c29a7b60dd.scope.
Dec  1 05:20:34 np0005540825 podman[270574]: 2025-12-01 10:20:33.990216906 +0000 UTC m=+0.042654962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:20:34 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:20:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/027b4fcdad7400c2dadd1dd2ae3b3dce906adcb3cb709e303516383aa87fd8b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/027b4fcdad7400c2dadd1dd2ae3b3dce906adcb3cb709e303516383aa87fd8b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/027b4fcdad7400c2dadd1dd2ae3b3dce906adcb3cb709e303516383aa87fd8b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/027b4fcdad7400c2dadd1dd2ae3b3dce906adcb3cb709e303516383aa87fd8b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:34 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/027b4fcdad7400c2dadd1dd2ae3b3dce906adcb3cb709e303516383aa87fd8b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:34 np0005540825 podman[270574]: 2025-12-01 10:20:34.132464732 +0000 UTC m=+0.184902788 container init 7aec4373cbcf587f0dfd157f26942fa37005528612d81bacd25409c29a7b60dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 05:20:34 np0005540825 podman[270574]: 2025-12-01 10:20:34.15156974 +0000 UTC m=+0.204007756 container start 7aec4373cbcf587f0dfd157f26942fa37005528612d81bacd25409c29a7b60dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_williamson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 05:20:34 np0005540825 podman[270574]: 2025-12-01 10:20:34.156053296 +0000 UTC m=+0.208491362 container attach 7aec4373cbcf587f0dfd157f26942fa37005528612d81bacd25409c29a7b60dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  1 05:20:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:34.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:34 np0005540825 strange_williamson[270591]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:20:34 np0005540825 strange_williamson[270591]: --> All data devices are unavailable
Dec  1 05:20:34 np0005540825 systemd[1]: libpod-7aec4373cbcf587f0dfd157f26942fa37005528612d81bacd25409c29a7b60dd.scope: Deactivated successfully.
Dec  1 05:20:34 np0005540825 podman[270606]: 2025-12-01 10:20:34.656728003 +0000 UTC m=+0.040515057 container died 7aec4373cbcf587f0dfd157f26942fa37005528612d81bacd25409c29a7b60dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 05:20:34 np0005540825 systemd[1]: var-lib-containers-storage-overlay-027b4fcdad7400c2dadd1dd2ae3b3dce906adcb3cb709e303516383aa87fd8b2-merged.mount: Deactivated successfully.
Dec  1 05:20:34 np0005540825 podman[270606]: 2025-12-01 10:20:34.715158645 +0000 UTC m=+0.098945669 container remove 7aec4373cbcf587f0dfd157f26942fa37005528612d81bacd25409c29a7b60dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_williamson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Dec  1 05:20:34 np0005540825 systemd[1]: libpod-conmon-7aec4373cbcf587f0dfd157f26942fa37005528612d81bacd25409c29a7b60dd.scope: Deactivated successfully.
Dec  1 05:20:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:35 np0005540825 podman[270713]: 2025-12-01 10:20:35.562757592 +0000 UTC m=+0.065127088 container create 3476a7e5b6bc1a8b8daee96b212fb9da4a8c592d81542d60ac02df23f61342aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cerf, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  1 05:20:35 np0005540825 systemd[1]: Started libpod-conmon-3476a7e5b6bc1a8b8daee96b212fb9da4a8c592d81542d60ac02df23f61342aa.scope.
Dec  1 05:20:35 np0005540825 podman[270713]: 2025-12-01 10:20:35.540886422 +0000 UTC m=+0.043256018 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:20:35 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:20:35 np0005540825 podman[270713]: 2025-12-01 10:20:35.676001733 +0000 UTC m=+0.178371319 container init 3476a7e5b6bc1a8b8daee96b212fb9da4a8c592d81542d60ac02df23f61342aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cerf, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  1 05:20:35 np0005540825 podman[270713]: 2025-12-01 10:20:35.688123999 +0000 UTC m=+0.190493535 container start 3476a7e5b6bc1a8b8daee96b212fb9da4a8c592d81542d60ac02df23f61342aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:20:35 np0005540825 podman[270713]: 2025-12-01 10:20:35.69200227 +0000 UTC m=+0.194371806 container attach 3476a7e5b6bc1a8b8daee96b212fb9da4a8c592d81542d60ac02df23f61342aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 05:20:35 np0005540825 stoic_cerf[270731]: 167 167
Dec  1 05:20:35 np0005540825 systemd[1]: libpod-3476a7e5b6bc1a8b8daee96b212fb9da4a8c592d81542d60ac02df23f61342aa.scope: Deactivated successfully.
Dec  1 05:20:35 np0005540825 podman[270713]: 2025-12-01 10:20:35.695728617 +0000 UTC m=+0.198098143 container died 3476a7e5b6bc1a8b8daee96b212fb9da4a8c592d81542d60ac02df23f61342aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cerf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:20:35 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7c060879a1ee041f11653dba59de84e4576481606d87cedccb6c7b5f9dd80cac-merged.mount: Deactivated successfully.
Dec  1 05:20:35 np0005540825 podman[270713]: 2025-12-01 10:20:35.751651994 +0000 UTC m=+0.254021520 container remove 3476a7e5b6bc1a8b8daee96b212fb9da4a8c592d81542d60ac02df23f61342aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cerf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:20:35 np0005540825 systemd[1]: libpod-conmon-3476a7e5b6bc1a8b8daee96b212fb9da4a8c592d81542d60ac02df23f61342aa.scope: Deactivated successfully.
Dec  1 05:20:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 14 KiB/s wr, 1 op/s
Dec  1 05:20:35 np0005540825 podman[270758]: 2025-12-01 10:20:35.976658187 +0000 UTC m=+0.062885899 container create e8074fb99d089cbfdad347202f4a8e3b3b6ebb5196772630260230fe3abcdbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 05:20:36 np0005540825 systemd[1]: Started libpod-conmon-e8074fb99d089cbfdad347202f4a8e3b3b6ebb5196772630260230fe3abcdbd2.scope.
Dec  1 05:20:36 np0005540825 podman[270758]: 2025-12-01 10:20:35.946896442 +0000 UTC m=+0.033124214 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:20:36 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:20:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff44f42df34881e840f18e8202bb12fd6055d4761766a38ebcf3b86af6bf3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff44f42df34881e840f18e8202bb12fd6055d4761766a38ebcf3b86af6bf3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff44f42df34881e840f18e8202bb12fd6055d4761766a38ebcf3b86af6bf3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:36 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bff44f42df34881e840f18e8202bb12fd6055d4761766a38ebcf3b86af6bf3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:20:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:36.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:20:36 np0005540825 podman[270758]: 2025-12-01 10:20:36.122911316 +0000 UTC m=+0.209139008 container init e8074fb99d089cbfdad347202f4a8e3b3b6ebb5196772630260230fe3abcdbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_yalow, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  1 05:20:36 np0005540825 nova_compute[256151]: 2025-12-01 10:20:36.121 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:36 np0005540825 podman[270758]: 2025-12-01 10:20:36.130678549 +0000 UTC m=+0.216906241 container start e8074fb99d089cbfdad347202f4a8e3b3b6ebb5196772630260230fe3abcdbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_yalow, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:20:36 np0005540825 podman[270758]: 2025-12-01 10:20:36.135176897 +0000 UTC m=+0.221404589 container attach e8074fb99d089cbfdad347202f4a8e3b3b6ebb5196772630260230fe3abcdbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_yalow, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:20:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:36.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:36 np0005540825 nova_compute[256151]: 2025-12-01 10:20:36.295 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]: {
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:    "1": [
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:        {
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            "devices": [
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "/dev/loop3"
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            ],
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            "lv_name": "ceph_lv0",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            "lv_size": "21470642176",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            "name": "ceph_lv0",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            "tags": {
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.cluster_name": "ceph",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.crush_device_class": "",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.encrypted": "0",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.osd_id": "1",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.type": "block",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.vdo": "0",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:                "ceph.with_tpm": "0"
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            },
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            "type": "block",
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:            "vg_name": "ceph_vg0"
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:        }
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]:    ]
Dec  1 05:20:36 np0005540825 inspiring_yalow[270774]: }
Dec  1 05:20:36 np0005540825 systemd[1]: libpod-e8074fb99d089cbfdad347202f4a8e3b3b6ebb5196772630260230fe3abcdbd2.scope: Deactivated successfully.
Dec  1 05:20:36 np0005540825 podman[270758]: 2025-12-01 10:20:36.448059611 +0000 UTC m=+0.534287323 container died e8074fb99d089cbfdad347202f4a8e3b3b6ebb5196772630260230fe3abcdbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  1 05:20:36 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9bff44f42df34881e840f18e8202bb12fd6055d4761766a38ebcf3b86af6bf3b-merged.mount: Deactivated successfully.
Dec  1 05:20:36 np0005540825 podman[270758]: 2025-12-01 10:20:36.50289733 +0000 UTC m=+0.589125022 container remove e8074fb99d089cbfdad347202f4a8e3b3b6ebb5196772630260230fe3abcdbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_yalow, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:20:36 np0005540825 systemd[1]: libpod-conmon-e8074fb99d089cbfdad347202f4a8e3b3b6ebb5196772630260230fe3abcdbd2.scope: Deactivated successfully.
Dec  1 05:20:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:37.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:20:37 np0005540825 podman[270886]: 2025-12-01 10:20:37.246514957 +0000 UTC m=+0.066892694 container create b41e77ecf4c2feb17e45ae2ee3381fb017622ce986d1ed796fad5cff08e0a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 05:20:37 np0005540825 systemd[1]: Started libpod-conmon-b41e77ecf4c2feb17e45ae2ee3381fb017622ce986d1ed796fad5cff08e0a1b2.scope.
Dec  1 05:20:37 np0005540825 podman[270886]: 2025-12-01 10:20:37.218823245 +0000 UTC m=+0.039201032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:20:37 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:20:37 np0005540825 nova_compute[256151]: 2025-12-01 10:20:37.339 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:20:37 np0005540825 podman[270886]: 2025-12-01 10:20:37.35446699 +0000 UTC m=+0.174844737 container init b41e77ecf4c2feb17e45ae2ee3381fb017622ce986d1ed796fad5cff08e0a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:20:37 np0005540825 podman[270886]: 2025-12-01 10:20:37.36557501 +0000 UTC m=+0.185952747 container start b41e77ecf4c2feb17e45ae2ee3381fb017622ce986d1ed796fad5cff08e0a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_vaughan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:20:37 np0005540825 podman[270886]: 2025-12-01 10:20:37.369976394 +0000 UTC m=+0.190354191 container attach b41e77ecf4c2feb17e45ae2ee3381fb017622ce986d1ed796fad5cff08e0a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:20:37 np0005540825 nifty_vaughan[270902]: 167 167
Dec  1 05:20:37 np0005540825 systemd[1]: libpod-b41e77ecf4c2feb17e45ae2ee3381fb017622ce986d1ed796fad5cff08e0a1b2.scope: Deactivated successfully.
Dec  1 05:20:37 np0005540825 podman[270886]: 2025-12-01 10:20:37.374590075 +0000 UTC m=+0.194967812 container died b41e77ecf4c2feb17e45ae2ee3381fb017622ce986d1ed796fad5cff08e0a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_vaughan, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:20:37 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b34dc38d67bcb5dd772a2d2e0df2ab2d2d686a52fa1bb45e4d39984a5f931728-merged.mount: Deactivated successfully.
Dec  1 05:20:37 np0005540825 podman[270886]: 2025-12-01 10:20:37.432810852 +0000 UTC m=+0.253188599 container remove b41e77ecf4c2feb17e45ae2ee3381fb017622ce986d1ed796fad5cff08e0a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 05:20:37 np0005540825 systemd[1]: libpod-conmon-b41e77ecf4c2feb17e45ae2ee3381fb017622ce986d1ed796fad5cff08e0a1b2.scope: Deactivated successfully.
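The nifty_vaughan sequence above (create, start, attach, died, remove, all inside roughly 150 ms) is the short-lived helper-container pattern cephadm uses for host checks; the container printed "167 167", which looks like a uid/gid probe for the ceph user. A minimal sketch of the same pattern with plain podman, assuming stat on /var/lib/ceph as the entrypoint (the real command line is not visible in these records):

    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"

    # Run a throwaway container, capture stdout, and let podman clean it up.
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    uid, gid = result.stdout.split()
    print(uid, gid)  # e.g. 167 167, the ceph uid/gid inside the image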
Dec  1 05:20:37 np0005540825 podman[270928]: 2025-12-01 10:20:37.667678011 +0000 UTC m=+0.062206772 container create ef406ec73fe6f09efb220fd6ee4e492a05e933fd0bb9d7d6ffd5b50dceadc05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  1 05:20:37 np0005540825 systemd[1]: Started libpod-conmon-ef406ec73fe6f09efb220fd6ee4e492a05e933fd0bb9d7d6ffd5b50dceadc05a.scope.
Dec  1 05:20:37 np0005540825 podman[270928]: 2025-12-01 10:20:37.644281991 +0000 UTC m=+0.038810822 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:20:37 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:20:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1ba98562ca9cdb3abd5821f2b290013148bf34c6b0d59ad3e5bd43832f55c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1ba98562ca9cdb3abd5821f2b290013148bf34c6b0d59ad3e5bd43832f55c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1ba98562ca9cdb3abd5821f2b290013148bf34c6b0d59ad3e5bd43832f55c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:37 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac1ba98562ca9cdb3abd5821f2b290013148bf34c6b0d59ad3e5bd43832f55c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:37 np0005540825 podman[270928]: 2025-12-01 10:20:37.774981927 +0000 UTC m=+0.169510698 container init ef406ec73fe6f09efb220fd6ee4e492a05e933fd0bb9d7d6ffd5b50dceadc05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:20:37 np0005540825 podman[270928]: 2025-12-01 10:20:37.797038222 +0000 UTC m=+0.191566973 container start ef406ec73fe6f09efb220fd6ee4e492a05e933fd0bb9d7d6ffd5b50dceadc05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_faraday, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 05:20:37 np0005540825 podman[270928]: 2025-12-01 10:20:37.800007989 +0000 UTC m=+0.194536740 container attach ef406ec73fe6f09efb220fd6ee4e492a05e933fd0bb9d7d6ffd5b50dceadc05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 05:20:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 2.3 KiB/s wr, 1 op/s
Dec  1 05:20:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:38.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:38.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
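The anonymous "HEAD / HTTP/1.0" requests landing every two seconds from 192.168.122.100 and .102 have the shape of load-balancer health probes against the RGW beast frontend. The same probe by hand, pointed at the local gateway (host and port below are placeholders; neither appears in these lines):

    import http.client

    conn = http.client.HTTPConnection("np0005540825", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 while the gateway is healthy
    conn.close()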
Dec  1 05:20:38 np0005540825 lvm[271020]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:20:38 np0005540825 lvm[271020]: VG ceph_vg0 finished
Dec  1 05:20:38 np0005540825 blissful_faraday[270946]: {}
Dec  1 05:20:38 np0005540825 systemd[1]: libpod-ef406ec73fe6f09efb220fd6ee4e492a05e933fd0bb9d7d6ffd5b50dceadc05a.scope: Deactivated successfully.
Dec  1 05:20:38 np0005540825 podman[270928]: 2025-12-01 10:20:38.574264434 +0000 UTC m=+0.968793205 container died ef406ec73fe6f09efb220fd6ee4e492a05e933fd0bb9d7d6ffd5b50dceadc05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_faraday, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:20:38 np0005540825 systemd[1]: libpod-ef406ec73fe6f09efb220fd6ee4e492a05e933fd0bb9d7d6ffd5b50dceadc05a.scope: Consumed 1.240s CPU time.
Dec  1 05:20:38 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ac1ba98562ca9cdb3abd5821f2b290013148bf34c6b0d59ad3e5bd43832f55c3-merged.mount: Deactivated successfully.
Dec  1 05:20:38 np0005540825 podman[270928]: 2025-12-01 10:20:38.624174715 +0000 UTC m=+1.018703466 container remove ef406ec73fe6f09efb220fd6ee4e492a05e933fd0bb9d7d6ffd5b50dceadc05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Dec  1 05:20:38 np0005540825 systemd[1]: libpod-conmon-ef406ec73fe6f09efb220fd6ee4e492a05e933fd0bb9d7d6ffd5b50dceadc05a.scope: Deactivated successfully.
Dec  1 05:20:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:20:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:20:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:20:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:20:38 np0005540825 nova_compute[256151]: 2025-12-01 10:20:38.725 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "f38af490-c2f2-4870-a0c3-c676494aad55" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:20:38 np0005540825 nova_compute[256151]: 2025-12-01 10:20:38.727 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:20:38 np0005540825 nova_compute[256151]: 2025-12-01 10:20:38.745 256155 DEBUG nova.compute.manager [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 05:20:38 np0005540825 nova_compute[256151]: 2025-12-01 10:20:38.831 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:20:38 np0005540825 nova_compute[256151]: 2025-12-01 10:20:38.832 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:20:38 np0005540825 nova_compute[256151]: 2025-12-01 10:20:38.842 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 05:20:38 np0005540825 nova_compute[256151]: 2025-12-01 10:20:38.843 256155 INFO nova.compute.claims [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Claim successful on node compute-0.ctlplane.example.com
Dec  1 05:20:38 np0005540825 nova_compute[256151]: 2025-12-01 10:20:38.982 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
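Nova's resource tracker shells out to ceph df exactly as logged above, and the same call can be repeated by hand to see the numbers the claim was checked against (command, client id, and conf path are taken verbatim from the log line):

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)
    # Cluster-wide totals; per-pool figures are in stats["pools"].
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])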
Dec  1 05:20:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:39 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:20:39 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:20:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:20:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403187449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:20:39
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.nfs', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.mgr', 'images']
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
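This balancer pass found nothing to move (0 of a possible 10 upmap changes on an already even cluster). The module's state can be queried directly; a minimal check, with the caveat that the exact JSON keys vary between Ceph releases:

    import json, subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format=json"]
    ))
    print(status.get("active"), status.get("mode"))  # e.g. True upmap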
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.504 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.513 256155 DEBUG nova.compute.provider_tree [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.542 256155 DEBUG nova.scheduler.client.report [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:20:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:20:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.562 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.563 256155 DEBUG nova.compute.manager [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.613 256155 DEBUG nova.compute.manager [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.614 256155 DEBUG nova.network.neutron [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.633 256155 INFO nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.652 256155 DEBUG nova.compute.manager [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.751 256155 DEBUG nova.compute.manager [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.753 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.754 256155 INFO nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Creating image(s)
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.797 256155 DEBUG nova.storage.rbd_utils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image f38af490-c2f2-4870-a0c3-c676494aad55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.845 256155 DEBUG nova.storage.rbd_utils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image f38af490-c2f2-4870-a0c3-c676494aad55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.890 256155 DEBUG nova.storage.rbd_utils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image f38af490-c2f2-4870-a0c3-c676494aad55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.896 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.927 256155 DEBUG nova.policy [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b56a238daf0445798410e51caada0ff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f6be4e572624210b91193c011607c08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.987 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
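The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping the child at a 1 GiB address space and 30 s of CPU (the --as/--cpu flags in the logged command line). The equivalent call through processutils:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1024**3, cpu_time=30),
    )
    print(out)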
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.989 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.990 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:20:39 np0005540825 nova_compute[256151]: 2025-12-01 10:20:39.991 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
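The Acquiring/acquired/released triplets in these nova lines come from oslo.concurrency's named-lock helper, which logs the waited/held timings itself at DEBUG. A minimal sketch of the same pattern (the lock name below is illustrative; nova uses the base-image hash, as seen above):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("image-cache-fetch")
    def fetch_base_image():
        # Body runs with the named in-process lock held; entry and exit
        # produce the acquired/released DEBUG lines.
        pass

    fetch_base_image()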
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:20:39 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007593366420850427 of space, bias 1.0, pg target 0.22780099262551282 quantized to 32 (current 32)
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
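Each pg_autoscaler row above fits pg_target = usage_ratio * bias * N with N = 300, consistent with the default mon_target_pg_per_osd of 100 on a three-OSD, 60 GiB cluster (the OSD count is inferred, not logged here). A quick check against three of the rows:

    # (pool, usage ratio, bias, pg target as logged)
    rows = [
        (".mgr", 7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("vms", 0.0007593366420850427, 1.0, 0.22780099262551282),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]
    for pool, ratio, bias, logged in rows:
        assert abs(ratio * bias * 300 - logged) < 1e-12, pool
    print("pg targets reproduced")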
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:20:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:20:40 np0005540825 nova_compute[256151]: 2025-12-01 10:20:40.040 256155 DEBUG nova.storage.rbd_utils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image f38af490-c2f2-4870-a0c3-c676494aad55_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:20:40 np0005540825 nova_compute[256151]: 2025-12-01 10:20:40.046 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 f38af490-c2f2-4870-a0c3-c676494aad55_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:20:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:40.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:40.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:40 np0005540825 nova_compute[256151]: 2025-12-01 10:20:40.380 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 f38af490-c2f2-4870-a0c3-c676494aad55_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:20:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:40 np0005540825 nova_compute[256151]: 2025-12-01 10:20:40.499 256155 DEBUG nova.storage.rbd_utils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] resizing rbd image f38af490-c2f2-4870-a0c3-c676494aad55_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
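The base image was imported with the rbd CLI (logged above) and the fresh disk is then grown to the flavor's 1 GiB root, per the resize line just above. A sketch of that resize through the python rbd bindings, assuming the same conf file and client id:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "f38af490-c2f2-4870-a0c3-c676494aad55_disk") as image:
                image.resize(1 * 1024**3)  # 1073741824 bytes, as logged
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()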
Dec  1 05:20:40 np0005540825 nova_compute[256151]: 2025-12-01 10:20:40.649 256155 DEBUG nova.network.neutron [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Successfully created port: e5d534a7-8e7b-4873-8258-5fac7c090568 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  1 05:20:40 np0005540825 nova_compute[256151]: 2025-12-01 10:20:40.662 256155 DEBUG nova.objects.instance [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'migration_context' on Instance uuid f38af490-c2f2-4870-a0c3-c676494aad55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 05:20:40 np0005540825 nova_compute[256151]: 2025-12-01 10:20:40.677 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  1 05:20:40 np0005540825 nova_compute[256151]: 2025-12-01 10:20:40.677 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Ensure instance console log exists: /var/lib/nova/instances/f38af490-c2f2-4870-a0c3-c676494aad55/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  1 05:20:40 np0005540825 nova_compute[256151]: 2025-12-01 10:20:40.678 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:20:40 np0005540825 nova_compute[256151]: 2025-12-01 10:20:40.679 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:20:40 np0005540825 nova_compute[256151]: 2025-12-01 10:20:40.679 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:20:41 np0005540825 nova_compute[256151]: 2025-12-01 10:20:41.164 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:41 np0005540825 nova_compute[256151]: 2025-12-01 10:20:41.296 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:20:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:41] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec  1 05:20:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:41] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec  1 05:20:41 np0005540825 nova_compute[256151]: 2025-12-01 10:20:41.779 256155 DEBUG nova.network.neutron [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Successfully updated port: e5d534a7-8e7b-4873-8258-5fac7c090568 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  1 05:20:41 np0005540825 nova_compute[256151]: 2025-12-01 10:20:41.795 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "refresh_cache-f38af490-c2f2-4870-a0c3-c676494aad55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:20:41 np0005540825 nova_compute[256151]: 2025-12-01 10:20:41.795 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquired lock "refresh_cache-f38af490-c2f2-4870-a0c3-c676494aad55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:20:41 np0005540825 nova_compute[256151]: 2025-12-01 10:20:41.795 256155 DEBUG nova.network.neutron [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  1 05:20:41 np0005540825 nova_compute[256151]: 2025-12-01 10:20:41.879 256155 DEBUG nova.compute.manager [req-9d87141e-18cc-4215-a9a0-ba443a3a421d req-0314d068-97a6-44e6-8e77-cad74be214ab dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Received event network-changed-e5d534a7-8e7b-4873-8258-5fac7c090568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:20:41 np0005540825 nova_compute[256151]: 2025-12-01 10:20:41.880 256155 DEBUG nova.compute.manager [req-9d87141e-18cc-4215-a9a0-ba443a3a421d req-0314d068-97a6-44e6-8e77-cad74be214ab dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Refreshing instance network info cache due to event network-changed-e5d534a7-8e7b-4873-8258-5fac7c090568. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 05:20:41 np0005540825 nova_compute[256151]: 2025-12-01 10:20:41.880 256155 DEBUG oslo_concurrency.lockutils [req-9d87141e-18cc-4215-a9a0-ba443a3a421d req-0314d068-97a6-44e6-8e77-cad74be214ab dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-f38af490-c2f2-4870-a0c3-c676494aad55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:20:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 167 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  1 05:20:41 np0005540825 nova_compute[256151]: 2025-12-01 10:20:41.939 256155 DEBUG nova.network.neutron [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  1 05:20:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:42.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:42.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.826 256155 DEBUG nova.network.neutron [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Updating instance_info_cache with network_info: [{"id": "e5d534a7-8e7b-4873-8258-5fac7c090568", "address": "fa:16:3e:85:c5:7f", "network": {"id": "0e5b3de9-56f5-4f4d-87c1-c01596567748", "bridge": "br-int", "label": "tempest-network-smoke--1903173849", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5d534a7-8e", "ovs_interfaceid": "e5d534a7-8e7b-4873-8258-5fac7c090568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.850 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Releasing lock "refresh_cache-f38af490-c2f2-4870-a0c3-c676494aad55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.851 256155 DEBUG nova.compute.manager [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Instance network_info: |[{"id": "e5d534a7-8e7b-4873-8258-5fac7c090568", "address": "fa:16:3e:85:c5:7f", "network": {"id": "0e5b3de9-56f5-4f4d-87c1-c01596567748", "bridge": "br-int", "label": "tempest-network-smoke--1903173849", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5d534a7-8e", "ovs_interfaceid": "e5d534a7-8e7b-4873-8258-5fac7c090568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.852 256155 DEBUG oslo_concurrency.lockutils [req-9d87141e-18cc-4215-a9a0-ba443a3a421d req-0314d068-97a6-44e6-8e77-cad74be214ab dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-f38af490-c2f2-4870-a0c3-c676494aad55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.852 256155 DEBUG nova.network.neutron [req-9d87141e-18cc-4215-a9a0-ba443a3a421d req-0314d068-97a6-44e6-8e77-cad74be214ab dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Refreshing network info cache for port e5d534a7-8e7b-4873-8258-5fac7c090568 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.858 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Start _get_guest_xml network_info=[{"id": "e5d534a7-8e7b-4873-8258-5fac7c090568", "address": "fa:16:3e:85:c5:7f", "network": {"id": "0e5b3de9-56f5-4f4d-87c1-c01596567748", "bridge": "br-int", "label": "tempest-network-smoke--1903173849", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5d534a7-8e", "ovs_interfaceid": "e5d534a7-8e7b-4873-8258-5fac7c090568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '8f75d6de-6ce0-44e1-b417-d0111424475b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.867 256155 WARNING nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.873 256155 DEBUG nova.virt.libvirt.host [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.874 256155 DEBUG nova.virt.libvirt.host [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.886 256155 DEBUG nova.virt.libvirt.host [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.887 256155 DEBUG nova.virt.libvirt.host [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.887 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.888 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T10:14:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e731827-1896-49cd-b0cc-12903555d217',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.889 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.889 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.890 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.890 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.891 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.891 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.892 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.892 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.893 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.893 256155 DEBUG nova.virt.hardware [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
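[Annotation] The hardware.py lines above trace nova's topology selection end to end: with no flavor or image constraints (limits and preferences all 0:0:0), each dimension's limit defaults to 65536, and the only exact topology for one vCPU is sockets=1, cores=1, threads=1. A minimal sketch of the enumeration step, assuming the simplified rule that sockets x cores x threads must equal the vCPU count (the function name is illustrative, not nova's API; the real logic is nova.virt.hardware._get_possible_cpu_topologies):

    # Enumerate (sockets, cores, threads) triples whose product is
    # exactly the vCPU count, bounded by the per-dimension limits.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log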
Dec  1 05:20:42 np0005540825 nova_compute[256151]: 2025-12-01 10:20:42.898 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:20:43 np0005540825 podman[271273]: 2025-12-01 10:20:43.289526574 +0000 UTC m=+0.149845406 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 05:20:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:20:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2540559284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:20:43 np0005540825 nova_compute[256151]: 2025-12-01 10:20:43.418 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:20:43 np0005540825 nova_compute[256151]: 2025-12-01 10:20:43.457 256155 DEBUG nova.storage.rbd_utils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image f38af490-c2f2-4870-a0c3-c676494aad55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:20:43 np0005540825 nova_compute[256151]: 2025-12-01 10:20:43.465 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:20:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:43.672Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:20:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:43.672Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:20:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:43.673Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
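[Annotation] The three alertmanager lines above show the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443) timing out, with the notification dropped after three attempts. A quick way to check whether those endpoints are reachable at all — a diagnostic sketch, with hosts and port taken from the log, not part of the deployment:

    # TCP reachability probe for the failing Alertmanager webhook targets.
    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=3):
                print(f"{host}:8443 reachable")
        except OSError as exc:
            print(f"{host}:8443 unreachable: {exc}")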
Dec  1 05:20:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 167 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec  1 05:20:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:20:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/786623811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:20:43 np0005540825 nova_compute[256151]: 2025-12-01 10:20:43.976 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
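[Annotation] The `ceph mon dump --format=json` round-trips above (about 0.5 s each) are how nova's RBD storage driver discovers the monitor addresses that become the `<host>` elements in the disk XML further down. A sketch of that discovery step, assuming the upstream ceph JSON shape in which each entry under "mons" carries a "public_addr" like "192.168.122.100:6789/0":

    # Monitor discovery roughly as nova.storage.rbd_utils consumes it.
    import json
    import subprocess

    def get_mon_addrs(user="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json", "--id", user,
             "--conf", conf])
        hosts, ports = [], []
        for mon in json.loads(out)["mons"]:
            addr = mon["public_addr"].rsplit("/", 1)[0]  # drop the "/0" nonce
            host, _, port = addr.rpartition(":")
            hosts.append(host)
            ports.append(port)
        return hosts, ports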
Dec  1 05:20:43 np0005540825 nova_compute[256151]: 2025-12-01 10:20:43.979 256155 DEBUG nova.virt.libvirt.vif [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:20:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-639918817',display_name='tempest-TestNetworkBasicOps-server-639918817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-639918817',id=7,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHtnv9NpVKDTfH0AF7Ug60W/MxmIJo2CT7fYSCvCKLYl7NoVFoTmizifAIbXo2JZu5ZWoR0iRQ9Zn+lHLe3BED+b0i3R0WUHKDORFyNZe5Erfivryp4oxHPAOWYul9Ucbg==',key_name='tempest-TestNetworkBasicOps-1633426243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-vzkf3nd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:20:39Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=f38af490-c2f2-4870-a0c3-c676494aad55,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5d534a7-8e7b-4873-8258-5fac7c090568", "address": "fa:16:3e:85:c5:7f", "network": {"id": "0e5b3de9-56f5-4f4d-87c1-c01596567748", "bridge": "br-int", "label": "tempest-network-smoke--1903173849", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5d534a7-8e", "ovs_interfaceid": "e5d534a7-8e7b-4873-8258-5fac7c090568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 05:20:43 np0005540825 nova_compute[256151]: 2025-12-01 10:20:43.980 256155 DEBUG nova.network.os_vif_util [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "e5d534a7-8e7b-4873-8258-5fac7c090568", "address": "fa:16:3e:85:c5:7f", "network": {"id": "0e5b3de9-56f5-4f4d-87c1-c01596567748", "bridge": "br-int", "label": "tempest-network-smoke--1903173849", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5d534a7-8e", "ovs_interfaceid": "e5d534a7-8e7b-4873-8258-5fac7c090568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:20:43 np0005540825 nova_compute[256151]: 2025-12-01 10:20:43.981 256155 DEBUG nova.network.os_vif_util [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:7f,bridge_name='br-int',has_traffic_filtering=True,id=e5d534a7-8e7b-4873-8258-5fac7c090568,network=Network(0e5b3de9-56f5-4f4d-87c1-c01596567748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5d534a7-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:20:43 np0005540825 nova_compute[256151]: 2025-12-01 10:20:43.983 256155 DEBUG nova.objects.instance [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'pci_devices' on Instance uuid f38af490-c2f2-4870-a0c3-c676494aad55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:20:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.000 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] End _get_guest_xml xml=<domain type="kvm">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  <uuid>f38af490-c2f2-4870-a0c3-c676494aad55</uuid>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  <name>instance-00000007</name>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  <memory>131072</memory>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  <vcpu>1</vcpu>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  <metadata>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <nova:name>tempest-TestNetworkBasicOps-server-639918817</nova:name>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <nova:creationTime>2025-12-01 10:20:42</nova:creationTime>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <nova:flavor name="m1.nano">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <nova:memory>128</nova:memory>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <nova:disk>1</nova:disk>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <nova:swap>0</nova:swap>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <nova:vcpus>1</nova:vcpus>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      </nova:flavor>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <nova:owner>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <nova:user uuid="5b56a238daf0445798410e51caada0ff">tempest-TestNetworkBasicOps-1248115384-project-member</nova:user>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <nova:project uuid="9f6be4e572624210b91193c011607c08">tempest-TestNetworkBasicOps-1248115384</nova:project>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      </nova:owner>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <nova:root type="image" uuid="8f75d6de-6ce0-44e1-b417-d0111424475b"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <nova:ports>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <nova:port uuid="e5d534a7-8e7b-4873-8258-5fac7c090568">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:          <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        </nova:port>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      </nova:ports>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    </nova:instance>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  </metadata>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  <sysinfo type="smbios">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <system>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <entry name="manufacturer">RDO</entry>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <entry name="product">OpenStack Compute</entry>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <entry name="serial">f38af490-c2f2-4870-a0c3-c676494aad55</entry>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <entry name="uuid">f38af490-c2f2-4870-a0c3-c676494aad55</entry>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <entry name="family">Virtual Machine</entry>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    </system>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  </sysinfo>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  <os>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <boot dev="hd"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <smbios mode="sysinfo"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  </os>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  <features>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <acpi/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <apic/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <vmcoreinfo/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  </features>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  <clock offset="utc">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <timer name="hpet" present="no"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  </clock>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  <cpu mode="host-model" match="exact">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  </cpu>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  <devices>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <disk type="network" device="disk">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/f38af490-c2f2-4870-a0c3-c676494aad55_disk">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <target dev="vda" bus="virtio"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <disk type="network" device="cdrom">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/f38af490-c2f2-4870-a0c3-c676494aad55_disk.config">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <target dev="sda" bus="sata"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <interface type="ethernet">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <mac address="fa:16:3e:85:c5:7f"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <mtu size="1442"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <target dev="tape5d534a7-8e"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    </interface>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <serial type="pty">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <log file="/var/lib/nova/instances/f38af490-c2f2-4870-a0c3-c676494aad55/console.log" append="off"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    </serial>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <video>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    </video>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <input type="tablet" bus="usb"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <rng model="virtio">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <backend model="random">/dev/urandom</backend>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    </rng>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <controller type="usb" index="0"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    <memballoon model="virtio">
Dec  1 05:20:44 np0005540825 nova_compute[256151]:      <stats period="10"/>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:    </memballoon>
Dec  1 05:20:44 np0005540825 nova_compute[256151]:  </devices>
Dec  1 05:20:44 np0005540825 nova_compute[256151]: </domain>
Dec  1 05:20:44 np0005540825 nova_compute[256151]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
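[Annotation] The domain XML dumped above is the complete spawn artifact: an RBD-backed virtio root disk and a SATA config-drive CDROM (both pointing at the three monitors discovered earlier), a virtio NIC targeted at tape5d534a7-8e with MTU 1442, and a pty serial console logged to console.log. A small sketch for pulling the disk layout back out of such a dump, assuming it has been saved to a local file (the filename is illustrative):

    # List device/target/source for each disk in a saved domain XML.
    import xml.etree.ElementTree as ET

    root = ET.parse("instance-00000007.xml").getroot()
    for disk in root.findall("devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(tgt.get("dev"), tgt.get("bus"), disk.get("device"),
              src.get("protocol"), src.get("name"))
    # vda virtio disk rbd vms/f38af490-c2f2-4870-a0c3-c676494aad55_disk
    # sda sata cdrom rbd vms/f38af490-c2f2-4870-a0c3-c676494aad55_disk.config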
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.003 256155 DEBUG nova.compute.manager [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Preparing to wait for external event network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.003 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.004 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.004 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
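[Annotation] The lock dance above is nova registering a waiter for network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568 before the VIF is plugged, so the neutron notification cannot arrive ahead of the waiter and be lost. A minimal sketch of that prepare-then-wait pattern with threading primitives (names are illustrative, not nova's API):

    # Prepare-then-wait: register the event before starting the work
    # that will eventually trigger it.
    import threading

    _events, _lock = {}, threading.Lock()

    def prepare_for_event(key):
        with _lock:                    # the "<uuid>-events" lock in the log
            return _events.setdefault(key, threading.Event())

    def deliver_event(key):            # runs when the notification arrives
        with _lock:
            ev = _events.pop(key, None)
        if ev:
            ev.set()

    waiter = prepare_for_event(
        "network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568")
    # ... plug the VIF here; neutron would later trigger the delivery ...
    deliver_event("network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568")
    print(waiter.wait(timeout=300))    # True once the event fires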
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.005 256155 DEBUG nova.virt.libvirt.vif [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:20:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-639918817',display_name='tempest-TestNetworkBasicOps-server-639918817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-639918817',id=7,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHtnv9NpVKDTfH0AF7Ug60W/MxmIJo2CT7fYSCvCKLYl7NoVFoTmizifAIbXo2JZu5ZWoR0iRQ9Zn+lHLe3BED+b0i3R0WUHKDORFyNZe5Erfivryp4oxHPAOWYul9Ucbg==',key_name='tempest-TestNetworkBasicOps-1633426243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-vzkf3nd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:20:39Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=f38af490-c2f2-4870-a0c3-c676494aad55,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5d534a7-8e7b-4873-8258-5fac7c090568", "address": "fa:16:3e:85:c5:7f", "network": {"id": "0e5b3de9-56f5-4f4d-87c1-c01596567748", "bridge": "br-int", "label": "tempest-network-smoke--1903173849", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5d534a7-8e", "ovs_interfaceid": "e5d534a7-8e7b-4873-8258-5fac7c090568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 05:20:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.006 256155 DEBUG nova.network.os_vif_util [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "e5d534a7-8e7b-4873-8258-5fac7c090568", "address": "fa:16:3e:85:c5:7f", "network": {"id": "0e5b3de9-56f5-4f4d-87c1-c01596567748", "bridge": "br-int", "label": "tempest-network-smoke--1903173849", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5d534a7-8e", "ovs_interfaceid": "e5d534a7-8e7b-4873-8258-5fac7c090568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.007 256155 DEBUG nova.network.os_vif_util [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:7f,bridge_name='br-int',has_traffic_filtering=True,id=e5d534a7-8e7b-4873-8258-5fac7c090568,network=Network(0e5b3de9-56f5-4f4d-87c1-c01596567748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5d534a7-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.008 256155 DEBUG os_vif [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:7f,bridge_name='br-int',has_traffic_filtering=True,id=e5d534a7-8e7b-4873-8258-5fac7c090568,network=Network(0e5b3de9-56f5-4f4d-87c1-c01596567748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5d534a7-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.009 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.010 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.011 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.016 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.016 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5d534a7-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.017 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape5d534a7-8e, col_values=(('external_ids', {'iface-id': 'e5d534a7-8e7b-4873-8258-5fac7c090568', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:c5:7f', 'vm-uuid': 'f38af490-c2f2-4870-a0c3-c676494aad55'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.019 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:44 np0005540825 NetworkManager[48963]: <info>  [1764584444.0208] manager: (tape5d534a7-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.022 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.027 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.028 256155 INFO os_vif [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:7f,bridge_name='br-int',has_traffic_filtering=True,id=e5d534a7-8e7b-4873-8258-5fac7c090568,network=Network(0e5b3de9-56f5-4f4d-87c1-c01596567748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5d534a7-8e')#033[00m
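[Annotation] The plug sequence above is os-vif driving OVS through ovsdbapp: an idempotent AddBridgeCommand for br-int (which changes nothing here), then an AddPortCommand plus a DbSetCommand stamping the Interface row's external_ids with the neutron port id, MAC, and instance UUID — the keys ovn-controller matches when it claims the port a moment later. Roughly the same transaction issued directly with ovsdbapp (connection string assumed; values taken from the log):

    # Sketch of the os-vif plug transaction using ovsdbapp directly.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tape5d534a7-8e", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tape5d534a7-8e",
            ("external_ids",
             {"iface-id": "e5d534a7-8e7b-4873-8258-5fac7c090568",
              "iface-status": "active",
              "attached-mac": "fa:16:3e:85:c5:7f",
              "vm-uuid": "f38af490-c2f2-4870-a0c3-c676494aad55"})))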
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.034 256155 DEBUG nova.network.neutron [req-9d87141e-18cc-4215-a9a0-ba443a3a421d req-0314d068-97a6-44e6-8e77-cad74be214ab dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Updated VIF entry in instance network info cache for port e5d534a7-8e7b-4873-8258-5fac7c090568. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.034 256155 DEBUG nova.network.neutron [req-9d87141e-18cc-4215-a9a0-ba443a3a421d req-0314d068-97a6-44e6-8e77-cad74be214ab dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Updating instance_info_cache with network_info: [{"id": "e5d534a7-8e7b-4873-8258-5fac7c090568", "address": "fa:16:3e:85:c5:7f", "network": {"id": "0e5b3de9-56f5-4f4d-87c1-c01596567748", "bridge": "br-int", "label": "tempest-network-smoke--1903173849", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5d534a7-8e", "ovs_interfaceid": "e5d534a7-8e7b-4873-8258-5fac7c090568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.052 256155 DEBUG oslo_concurrency.lockutils [req-9d87141e-18cc-4215-a9a0-ba443a3a421d req-0314d068-97a6-44e6-8e77-cad74be214ab dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-f38af490-c2f2-4870-a0c3-c676494aad55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:20:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.090 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.091 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.091 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No VIF found with MAC fa:16:3e:85:c5:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.092 256155 INFO nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Using config drive#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.126 256155 DEBUG nova.storage.rbd_utils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image f38af490-c2f2-4870-a0c3-c676494aad55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:20:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:44.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.484 256155 INFO nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Creating config drive at /var/lib/nova/instances/f38af490-c2f2-4870-a0c3-c676494aad55/disk.config#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.492 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f38af490-c2f2-4870-a0c3-c676494aad55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpspond1rw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.623 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f38af490-c2f2-4870-a0c3-c676494aad55/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpspond1rw" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.662 256155 DEBUG nova.storage.rbd_utils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image f38af490-c2f2-4870-a0c3-c676494aad55_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.666 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f38af490-c2f2-4870-a0c3-c676494aad55/disk.config f38af490-c2f2-4870-a0c3-c676494aad55_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.864 256155 DEBUG oslo_concurrency.processutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f38af490-c2f2-4870-a0c3-c676494aad55/disk.config f38af490-c2f2-4870-a0c3-c676494aad55_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.865 256155 INFO nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Deleting local config drive /var/lib/nova/instances/f38af490-c2f2-4870-a0c3-c676494aad55/disk.config because it was imported into RBD.#033[00m
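[Annotation] Config-drive handling on a Ceph-backed compute is a three-step dance, all visible above: check that <uuid>_disk.config does not already exist in the vms pool, build the ISO locally with mkisofs (Joliet + Rock Ridge, volume label config-2 — the label cloud-init probes for), then rbd-import it and delete the local file. The same sequence as a sketch, reusing the commands and paths from the log (the staged metadata directory is nova's temporary directory and will differ per run):

    # Build-and-import sequence for the config drive seen above.
    import os
    import subprocess

    inst = "f38af490-c2f2-4870-a0c3-c676494aad55"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # 1. Pack the staged metadata into an ISO labelled "config-2".
    subprocess.check_call(["/usr/bin/mkisofs", "-o", iso, "-ldots",
                           "-allow-lowercase", "-allow-multidot", "-l",
                           "-quiet", "-J", "-r", "-V", "config-2",
                           "/tmp/tmpspond1rw"])  # temp dir name from the log

    # 2. Import into the Ceph "vms" pool, then drop the local copy.
    subprocess.check_call(["rbd", "import", "--pool", "vms", iso,
                           f"{inst}_disk.config", "--image-format=2",
                           "--id", "openstack", "--conf",
                           "/etc/ceph/ceph.conf"])
    os.remove(iso)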
Dec  1 05:20:44 np0005540825 kernel: tape5d534a7-8e: entered promiscuous mode
Dec  1 05:20:44 np0005540825 NetworkManager[48963]: <info>  [1764584444.9438] manager: (tape5d534a7-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Dec  1 05:20:44 np0005540825 ovn_controller[153404]: 2025-12-01T10:20:44Z|00050|binding|INFO|Claiming lport e5d534a7-8e7b-4873-8258-5fac7c090568 for this chassis.
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.962 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:44 np0005540825 ovn_controller[153404]: 2025-12-01T10:20:44Z|00051|binding|INFO|e5d534a7-8e7b-4873-8258-5fac7c090568: Claiming fa:16:3e:85:c5:7f 10.100.0.24
Dec  1 05:20:44 np0005540825 nova_compute[256151]: 2025-12-01 10:20:44.972 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:44 np0005540825 systemd-machined[216307]: New machine qemu-3-instance-00000007.
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.002 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:c5:7f 10.100.0.24'], port_security=['fa:16:3e:85:c5:7f 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': 'f38af490-c2f2-4870-a0c3-c676494aad55', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e5b3de9-56f5-4f4d-87c1-c01596567748', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94f4da7f-2053-479c-a6f3-8ad7a572c1e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26afc505-4eba-4dd1-91d0-5142d49a2356, chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=e5d534a7-8e7b-4873-8258-5fac7c090568) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.004 163291 INFO neutron.agent.ovn.metadata.agent [-] Port e5d534a7-8e7b-4873-8258-5fac7c090568 in datapath 0e5b3de9-56f5-4f4d-87c1-c01596567748 bound to our chassis#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.005 163291 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0e5b3de9-56f5-4f4d-87c1-c01596567748#033[00m
Dec  1 05:20:45 np0005540825 systemd[1]: Started Virtual Machine qemu-3-instance-00000007.
Dec  1 05:20:45 np0005540825 ovn_controller[153404]: 2025-12-01T10:20:45Z|00052|binding|INFO|Setting lport e5d534a7-8e7b-4873-8258-5fac7c090568 ovn-installed in OVS
Dec  1 05:20:45 np0005540825 ovn_controller[153404]: 2025-12-01T10:20:45Z|00053|binding|INFO|Setting lport e5d534a7-8e7b-4873-8258-5fac7c090568 up in Southbound
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.022 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[e290ed7d-ed91-4baa-9de8-0b02a1931030]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.023 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.024 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0e5b3de9-51 in ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.026 262668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0e5b3de9-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.026 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[83d40049-a120-4240-bcd7-01628d6fb422]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.028 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[27888731-edce-459d-9d24-8a1ed86296c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 systemd-udevd[271416]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 05:20:45 np0005540825 NetworkManager[48963]: <info>  [1764584445.0470] device (tape5d534a7-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 05:20:45 np0005540825 NetworkManager[48963]: <info>  [1764584445.0488] device (tape5d534a7-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.052 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6c316e-4cc5-4aaf-bce0-4f4bd988fed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.076 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[a303c765-b065-4cd9-8930-eb039fc89929]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.116 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[0dbec112-e245-4624-9ffc-0bc84a154e2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.122 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[9031159c-47df-4af0-82f4-547aae2b9780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 NetworkManager[48963]: <info>  [1764584445.1241] manager: (tap0e5b3de9-50): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.166 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[4445e4e2-cf25-46a0-bbd9-650be0c0aee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.170 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[860d489f-c43a-46fa-843a-9ea93b81cf97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 NetworkManager[48963]: <info>  [1764584445.2007] device (tap0e5b3de9-50): carrier: link connected
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.211 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1e4dc1-67dc-4764-8f6d-29bb02c64998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.224 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[168a6e05-50e4-44a7-9aa5-23e4dab37162]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e5b3de9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:1d:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431383, 'reachable_time': 37378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271473, 'error': None, 'target': 'ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.243 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[96e0464f-633e-4197-995d-5515c5bed6d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:1dc4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431383, 'tstamp': 431383}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271474, 'error': None, 'target': 'ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.258 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[d9375b13-d8d3-4f2f-b7af-a7a17083bbd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e5b3de9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:1d:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431383, 'reachable_time': 37378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271475, 'error': None, 'target': 'ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
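The two large replies are RTM_NEWLINK dumps for the in-namespace end of the veth: IFLA_IFNAME tap0e5b3de9-51, MAC fa:16:3e:14:1d:c4, MTU 1500, carrier up. In these dumps 'attrs' is a list of [name, value] pairs, so extracting a field is a short scan; a sketch, assuming msg is bound to one of the dicts shown above (pyroute2's own message objects expose an equivalent get_attr() method):

    def get_attr(msg, name, default=None):
        """Return the first value of a netlink attribute in a pyroute2 dump."""
        for key, value in msg['attrs']:
            if key == name:
                return value
        return default

    ifname = get_attr(msg, 'IFLA_IFNAME')     # 'tap0e5b3de9-51'
    mac = get_attr(msg, 'IFLA_ADDRESS')       # 'fa:16:3e:14:1d:c4'
    carrier = get_attr(msg, 'IFLA_CARRIER')   # 1 -> link has carrier
    mtu = get_attr(msg, 'IFLA_MTU')           # 1500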
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.298 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[e18f6603-e30e-40f9-9e7b-f68eb034a4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.367 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb71203-7a8c-44e1-add1-96d1823cebdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.373 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e5b3de9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.374 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.375 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e5b3de9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.378 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:45 np0005540825 NetworkManager[48963]: <info>  [1764584445.3790] manager: (tap0e5b3de9-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Dec  1 05:20:45 np0005540825 kernel: tap0e5b3de9-50: entered promiscuous mode
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.380 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.381 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0e5b3de9-50, col_values=(('external_ids', {'iface-id': '9d1b1966-b85d-4c48-9bc3-59ebb92f0fa7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
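These three ovsdbapp transactions wire the root-namespace end of the veth into OVN: DelPortCommand removes any stale tap0e5b3de9-50 from br-ex (a no-op here, hence "Transaction caused no change"), AddPortCommand attaches it to br-int, and DbSetCommand sets external_ids:iface-id to the Neutron port UUID so ovn-controller can bind the interface, which it reacts to a few lines below. The equivalent calls through ovsdbapp's public API look roughly like this; the socket path is an assumption, the real agent reuses its configured OVSDB connection:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap0e5b3de9-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap0e5b3de9-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap0e5b3de9-50',
            ('external_ids', {'iface-id': '9d1b1966-b85d-4c48-9bc3-59ebb92f0fa7'})))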
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.382 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:45 np0005540825 ovn_controller[153404]: 2025-12-01T10:20:45Z|00054|binding|INFO|Releasing lport 9d1b1966-b85d-4c48-9bc3-59ebb92f0fa7 from this chassis (sb_readonly=0)
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.384 163291 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0e5b3de9-56f5-4f4d-87c1-c01596567748.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0e5b3de9-56f5-4f4d-87c1-c01596567748.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.385 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0da112-16e7-465f-a308-71494af3d84e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.385 163291 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: global
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    log         /dev/log local0 debug
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    log-tag     haproxy-metadata-proxy-0e5b3de9-56f5-4f4d-87c1-c01596567748
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    user        root
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    group       root
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    maxconn     1024
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    pidfile     /var/lib/neutron/external/pids/0e5b3de9-56f5-4f4d-87c1-c01596567748.pid.haproxy
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    daemon
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: defaults
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    log global
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    mode http
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    option httplog
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    option dontlognull
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    option http-server-close
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    option forwardfor
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    retries                 3
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    timeout http-request    30s
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    timeout connect         30s
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    timeout client          32s
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    timeout server          32s
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    timeout http-keep-alive 30s
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: listen listener
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    bind 169.254.169.254:80
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]:    http-request add-header X-OVN-Network-ID 0e5b3de9-56f5-4f4d-87c1-c01596567748
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
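The generated configuration binds haproxy to 169.254.169.254:80 inside the namespace and forwards every request to the agent's Unix socket, adding an X-OVN-Network-ID header so the metadata service can attribute the request to this network. A request arriving at that socket can be reproduced with the standard library alone; the socket path and header value are taken from the config above, and the metadata path is the usual EC2-style example:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection over a Unix domain socket."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.request('GET', '/latest/meta-data/',
                 headers={'X-OVN-Network-ID':
                          '0e5b3de9-56f5-4f4d-87c1-c01596567748'})
    print(conn.getresponse().status)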
Dec  1 05:20:45 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:20:45.386 163291 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748', 'env', 'PROCESS_TAG=haproxy-0e5b3de9-56f5-4f4d-87c1-c01596567748', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0e5b3de9-56f5-4f4d-87c1-c01596567748.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.398 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.474 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584445.4742384, f38af490-c2f2-4870-a0c3-c676494aad55 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.475 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] VM Started (Lifecycle Event)#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.497 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.503 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584445.4744663, f38af490-c2f2-4870-a0c3-c676494aad55 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.503 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] VM Paused (Lifecycle Event)#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.525 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.529 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.550 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
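The "Paused" event mid-build is expected: libvirt starts the guest paused, Nova waits for the network plug, then resumes it. The integers in "DB power_state: 0, VM power_state: 3" are nova.compute.power_state constants; a lookup table for reading these lines:

    # Constants from nova.compute.power_state.
    POWER_STATE = {
        0: 'NOSTATE',
        1: 'RUNNING',
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }

    print(POWER_STATE[0], '->', POWER_STATE[3])   # NOSTATE -> PAUSED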
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.718 256155 DEBUG nova.compute.manager [req-8222e5a5-fa11-427d-854d-c4210b243845 req-e245dbc6-c29f-4881-8003-f0b74448877d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Received event network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.718 256155 DEBUG oslo_concurrency.lockutils [req-8222e5a5-fa11-427d-854d-c4210b243845 req-e245dbc6-c29f-4881-8003-f0b74448877d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.719 256155 DEBUG oslo_concurrency.lockutils [req-8222e5a5-fa11-427d-854d-c4210b243845 req-e245dbc6-c29f-4881-8003-f0b74448877d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.719 256155 DEBUG oslo_concurrency.lockutils [req-8222e5a5-fa11-427d-854d-c4210b243845 req-e245dbc6-c29f-4881-8003-f0b74448877d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.720 256155 DEBUG nova.compute.manager [req-8222e5a5-fa11-427d-854d-c4210b243845 req-e245dbc6-c29f-4881-8003-f0b74448877d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Processing event network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.722 256155 DEBUG nova.compute.manager [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
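The lines above show the two halves of Nova's external-event handshake: the spawning thread (req-1b0fa522) has registered a waiter for network-vif-plugged, and the event delivered by Neutron through the API (req-8222e5a5) pops it under the per-instance events lock, so the wait completes in zero seconds. The underlying pattern is a keyed event table guarded by a lock; a stripped-down sketch of the idea, not Nova's actual classes:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}   # (instance_uuid, event_name) -> Event

        def prepare(self, instance, name):
            with self._lock:
                return self._events.setdefault((instance, name),
                                               threading.Event())

        def pop(self, instance, name):
            with self._lock:
                ev = self._events.pop((instance, name), None)
            if ev is not None:
                ev.set()        # wake the waiting spawn thread
            return ev is not None

    events = InstanceEvents()
    waiter = events.prepare('f38af490', 'network-vif-plugged')
    events.pop('f38af490', 'network-vif-plugged')   # Neutron notification
    waiter.wait(timeout=300)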
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.726 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584445.726236, f38af490-c2f2-4870-a0c3-c676494aad55 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.727 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] VM Resumed (Lifecycle Event)#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.730 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.736 256155 INFO nova.virt.libvirt.driver [-] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Instance spawned successfully.#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.737 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.792 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.797 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.798 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.798 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.799 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.799 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.800 256155 DEBUG nova.virt.libvirt.driver [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.806 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 05:20:45 np0005540825 podman[271551]: 2025-12-01 10:20:45.821800688 +0000 UTC m=+0.080574071 container create 27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  1 05:20:45 np0005540825 systemd[1]: Started libpod-conmon-27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860.scope.
Dec  1 05:20:45 np0005540825 podman[271551]: 2025-12-01 10:20:45.780546263 +0000 UTC m=+0.039319726 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 05:20:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec  1 05:20:45 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:20:45 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3777ef8f94a0385a4e6cb0dd904f418748ac565a1093fa51e7f3ed76a8067db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.916 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 05:20:45 np0005540825 podman[271551]: 2025-12-01 10:20:45.927104282 +0000 UTC m=+0.185877715 container init 27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:20:45 np0005540825 podman[271551]: 2025-12-01 10:20:45.934482784 +0000 UTC m=+0.193256187 container start 27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.938 256155 INFO nova.compute.manager [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Took 6.19 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 05:20:45 np0005540825 nova_compute[256151]: 2025-12-01 10:20:45.939 256155 DEBUG nova.compute.manager [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:20:45 np0005540825 neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748[271567]: [NOTICE]   (271571) : New worker (271573) forked
Dec  1 05:20:45 np0005540825 neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748[271567]: [NOTICE]   (271571) : Loading success.
Dec  1 05:20:46 np0005540825 nova_compute[256151]: 2025-12-01 10:20:46.053 256155 INFO nova.compute.manager [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Took 7.26 seconds to build instance.#033[00m
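Spawn took 6.19 s and the full build 7.26 s, so claim, networking, and block-device setup added roughly another second around the hypervisor work. Durations like these can be scraped straight from the log; a short sketch, assuming the journal has been exported to a text file named messages:

    import re

    PATTERN = re.compile(
        r'\[instance: (?P<uuid>[0-9a-f-]+)\] Took (?P<secs>[\d.]+) seconds '
        r'to (?P<phase>spawn the instance on the hypervisor|build instance)')

    with open('messages') as log:   # assumed export of this journal
        for line in log:
            m = PATTERN.search(line)
            if m:
                print(m['uuid'], m['phase'], m['secs'] + 's')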
Dec  1 05:20:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:46 np0005540825 nova_compute[256151]: 2025-12-01 10:20:46.127 256155 DEBUG oslo_concurrency.lockutils [None req-1b0fa522-ccfd-426c-917a-0d5cd8e563fe 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:20:46 np0005540825 nova_compute[256151]: 2025-12-01 10:20:46.196 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:46.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:47.230Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
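This dispatcher error recurs through the section (again at 10:20:53 and 10:20:57): Alertmanager cannot POST notifications to the ceph-dashboard webhook receivers on compute-1 and compute-2 within its deadline. Reachability of a receiver can be probed with a bounded request; the URL is copied from the error message, and the empty alerts payload is a minimal stand-in for Alertmanager's real webhook body:

    import json
    import urllib.request

    URL = 'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver'
    req = urllib.request.Request(
        URL, data=json.dumps({'alerts': []}).encode(),
        headers={'Content-Type': 'application/json'})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print('receiver answered:', resp.status)
    except OSError as exc:   # URLError and timeouts both derive from OSError
        print('receiver unreachable:', exc)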
Dec  1 05:20:47 np0005540825 nova_compute[256151]: 2025-12-01 10:20:47.799 256155 DEBUG nova.compute.manager [req-faf4e565-4e98-4af3-9caa-b87b7c5e766a req-55952e55-6608-4342-9870-7af003e8cde9 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Received event network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:20:47 np0005540825 nova_compute[256151]: 2025-12-01 10:20:47.799 256155 DEBUG oslo_concurrency.lockutils [req-faf4e565-4e98-4af3-9caa-b87b7c5e766a req-55952e55-6608-4342-9870-7af003e8cde9 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:20:47 np0005540825 nova_compute[256151]: 2025-12-01 10:20:47.800 256155 DEBUG oslo_concurrency.lockutils [req-faf4e565-4e98-4af3-9caa-b87b7c5e766a req-55952e55-6608-4342-9870-7af003e8cde9 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:20:47 np0005540825 nova_compute[256151]: 2025-12-01 10:20:47.800 256155 DEBUG oslo_concurrency.lockutils [req-faf4e565-4e98-4af3-9caa-b87b7c5e766a req-55952e55-6608-4342-9870-7af003e8cde9 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:20:47 np0005540825 nova_compute[256151]: 2025-12-01 10:20:47.801 256155 DEBUG nova.compute.manager [req-faf4e565-4e98-4af3-9caa-b87b7c5e766a req-55952e55-6608-4342-9870-7af003e8cde9 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] No waiting events found dispatching network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:20:47 np0005540825 nova_compute[256151]: 2025-12-01 10:20:47.801 256155 WARNING nova.compute.manager [req-faf4e565-4e98-4af3-9caa-b87b7c5e766a req-55952e55-6608-4342-9870-7af003e8cde9 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Received unexpected event network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568 for instance with vm_state active and task_state None.#033[00m
Dec  1 05:20:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec  1 05:20:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  1 05:20:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  1 05:20:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:48.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:49 np0005540825 nova_compute[256151]: 2025-12-01 10:20:49.019 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec  1 05:20:50 np0005540825 nova_compute[256151]: 2025-12-01 10:20:50.052 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:20:50 np0005540825 nova_compute[256151]: 2025-12-01 10:20:50.053 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 05:20:50 np0005540825 nova_compute[256151]: 2025-12-01 10:20:50.053 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 05:20:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:50.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:50.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:50 np0005540825 nova_compute[256151]: 2025-12-01 10:20:50.253 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "refresh_cache-f38af490-c2f2-4870-a0c3-c676494aad55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:20:50 np0005540825 nova_compute[256151]: 2025-12-01 10:20:50.254 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquired lock "refresh_cache-f38af490-c2f2-4870-a0c3-c676494aad55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:20:50 np0005540825 nova_compute[256151]: 2025-12-01 10:20:50.254 256155 DEBUG nova.network.neutron [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 05:20:50 np0005540825 nova_compute[256151]: 2025-12-01 10:20:50.255 256155 DEBUG nova.objects.instance [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f38af490-c2f2-4870-a0c3-c676494aad55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:20:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:51 np0005540825 nova_compute[256151]: 2025-12-01 10:20:51.233 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:51] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec  1 05:20:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:20:51] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec  1 05:20:51 np0005540825 nova_compute[256151]: 2025-12-01 10:20:51.796 256155 DEBUG nova.network.neutron [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Updating instance_info_cache with network_info: [{"id": "e5d534a7-8e7b-4873-8258-5fac7c090568", "address": "fa:16:3e:85:c5:7f", "network": {"id": "0e5b3de9-56f5-4f4d-87c1-c01596567748", "bridge": "br-int", "label": "tempest-network-smoke--1903173849", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5d534a7-8e", "ovs_interfaceid": "e5d534a7-8e7b-4873-8258-5fac7c090568", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
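The refreshed info_cache is a JSON document embedded in the log record: port e5d534a7-..., MAC fa:16:3e:85:c5:7f, fixed IP 10.100.0.24 on a /28, MTU 1442 on a tunneled network. Everything between "network_info:" and the function name parses as JSON, so the addressing can be recovered mechanically; a sketch, assuming line holds the full log line above:

    import json
    import re

    m = re.search(r'network_info: (\[.*\]) update_instance_cache', line)
    for vif in json.loads(m.group(1)):
        fixed = [ip['address']
                 for subnet in vif['network']['subnets']
                 for ip in subnet['ips']]
        print(vif['id'], vif['address'], fixed,
              'mtu', vif['network']['meta']['mtu'])
    # e5d534a7-...  fa:16:3e:85:c5:7f  ['10.100.0.24']  mtu 1442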
Dec  1 05:20:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec  1 05:20:51 np0005540825 nova_compute[256151]: 2025-12-01 10:20:51.998 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Releasing lock "refresh_cache-f38af490-c2f2-4870-a0c3-c676494aad55" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:20:51 np0005540825 nova_compute[256151]: 2025-12-01 10:20:51.999 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 05:20:52 np0005540825 nova_compute[256151]: 2025-12-01 10:20:52.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:20:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:52.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:52.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:53 np0005540825 nova_compute[256151]: 2025-12-01 10:20:53.022 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:20:53 np0005540825 nova_compute[256151]: 2025-12-01 10:20:53.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:20:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:53.674Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:20:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Dec  1 05:20:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:54 np0005540825 nova_compute[256151]: 2025-12-01 10:20:54.022 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:54.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:54.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:20:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:20:55 np0005540825 nova_compute[256151]: 2025-12-01 10:20:55.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:20:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:20:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 75 op/s
Dec  1 05:20:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:56.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:56 np0005540825 nova_compute[256151]: 2025-12-01 10:20:56.236 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:56.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:57 np0005540825 nova_compute[256151]: 2025-12-01 10:20:57.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:20:57 np0005540825 nova_compute[256151]: 2025-12-01 10:20:57.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:20:57 np0005540825 nova_compute[256151]: 2025-12-01 10:20:57.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 05:20:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:20:57.232Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:20:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec  1 05:20:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec  1 05:20:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
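These INFO lines are benign when several gateways run against one cluster: each RGW instance scans the reshard queue, and whichever grabbed the per-entry lock first does the work while the others skip. What is actually queued can be asked directly; the subcommand is real, treating its stdout as a JSON array is an assumption:

```python
# List the bucket-reshard queue the lock messages above refer to.
import json, subprocess

entries = json.loads(subprocess.check_output(
    ["radosgw-admin", "reshard", "list"]))
print(len(entries), "pending reshard entries")
```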
Dec  1 05:20:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 KiB/s wr, 65 op/s
Dec  1 05:20:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:20:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:20:58.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:20:58 np0005540825 podman[271594]: 2025-12-01 10:20:58.229741538 +0000 UTC m=+0.083194549 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
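The health_status=healthy event is podman running the container's configured healthcheck ('/openstack/healthcheck' per the config_data above). The same state can be read back out-of-band; the field path follows podman's Docker-compatible inspect schema, which is an assumption here:

```python
# Read back the health state behind the podman event above.
import json, subprocess

state = json.loads(subprocess.check_output(
    ["podman", "inspect", "ovn_metadata_agent"]))[0]["State"]
print(state["Health"]["Status"], state["Health"]["FailingStreak"])
```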
Dec  1 05:20:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:20:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:20:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:20:58.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:20:58 np0005540825 ovn_controller[153404]: 2025-12-01T10:20:58Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:c5:7f 10.100.0.24
Dec  1 05:20:58 np0005540825 ovn_controller[153404]: 2025-12-01T10:20:58Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:c5:7f 10.100.0.24
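OVN answers DHCP natively from ovn-controller's pinctrl thread, so this OFFER/ACK pair for fa:16:3e:85:c5:7f never involves a dnsmasq process. A tiny parser for pulling the (event, MAC, IP) triple out of such a line; the regex is an assumption based on the format above:

```python
# Extract (event, MAC, IP) from an ovn-controller pinctrl DHCP line.
import re

line = ("2025-12-01T10:20:58Z|00009|pinctrl(ovn_pinctrl0)|INFO|"
        "DHCPACK fa:16:3e:85:c5:7f 10.100.0.24")
m = re.search(r'\|INFO\|(DHCP\w+) ([0-9a-f:]{17}) (\S+)', line)
print(m.groups())   # ('DHCPACK', 'fa:16:3e:85:c5:7f', '10.100.0.24')
```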
Dec  1 05:20:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:20:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:20:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:20:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:20:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:20:59 np0005540825 nova_compute[256151]: 2025-12-01 10:20:59.025 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:20:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 KiB/s wr, 65 op/s
Dec  1 05:21:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:00.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:00.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:21:01 np0005540825 nova_compute[256151]: 2025-12-01 10:21:01.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:21:01 np0005540825 nova_compute[256151]: 2025-12-01 10:21:01.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:21:01 np0005540825 nova_compute[256151]: 2025-12-01 10:21:01.050 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:21:01 np0005540825 nova_compute[256151]: 2025-12-01 10:21:01.051 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:21:01 np0005540825 nova_compute[256151]: 2025-12-01 10:21:01.051 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
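The Acquiring/acquired/released triplet is emitted by oslo.concurrency's lock wrapper itself, not by nova code; the pattern producing it looks like this (lock name taken from the log, the decorated function body is hypothetical):

```python
# oslo.concurrency pattern behind the three DEBUG lines above: the
# wrapper logs acquire/acquired/released around the decorated call.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def clean_compute_node_cache():
    # runs with the in-process "compute_resources" lock held
    pass

clean_compute_node_cache()
```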
Dec  1 05:21:01 np0005540825 nova_compute[256151]: 2025-12-01 10:21:01.052 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 05:21:01 np0005540825 nova_compute[256151]: 2025-12-01 10:21:01.052 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:21:01 np0005540825 nova_compute[256151]: 2025-12-01 10:21:01.303 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:01] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Dec  1 05:21:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:01] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Dec  1 05:21:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:21:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/581613495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:21:01 np0005540825 nova_compute[256151]: 2025-12-01 10:21:01.710 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
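Here update_available_resource shells out to the ceph CLI to size the RBD pool backing ephemeral disks; the call took 0.657 s. The same probe can be run by hand with the flags copied from the log line (the two totals printed are standard keys in ceph df's JSON output):

```python
# Re-run the exact "ceph df" probe nova logged above and pull the
# cluster-wide totals out of its JSON.
import json, subprocess

stats = json.loads(subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]))["stats"]
print(stats["total_bytes"], stats["total_avail_bytes"])
```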
Dec  1 05:21:01 np0005540825 nova_compute[256151]: 2025-12-01 10:21:01.788 256155 DEBUG nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  1 05:21:01 np0005540825 nova_compute[256151]: 2025-12-01 10:21:01.789 256155 DEBUG nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  1 05:21:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 284 op/s
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.004 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.006 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4358MB free_disk=59.9217529296875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.007 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.007 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:21:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:02.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:02 np0005540825 podman[271640]: 2025-12-01 10:21:02.21066981 +0000 UTC m=+0.070098838 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:21:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:02.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.610 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Instance f38af490-c2f2-4870-a0c3-c676494aad55 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.610 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.610 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.628 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing inventories for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.648 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating ProviderTree inventory for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.649 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating inventory in ProviderTree for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.680 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing aggregate associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.722 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing trait associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SVM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
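The inventory dict that keeps being reported unchanged is what placement uses for admission control: capacity per resource class is (total - reserved) * allocation_ratio. A worked check with the numbers from the inventory above:

```python
# Effective placement capacity implied by the logged inventory.
inv = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, v in inv.items():
    print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
# VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
```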
Dec  1 05:21:02 np0005540825 nova_compute[256151]: 2025-12-01 10:21:02.765 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:21:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:21:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550727021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:21:03 np0005540825 nova_compute[256151]: 2025-12-01 10:21:03.270 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:21:03 np0005540825 nova_compute[256151]: 2025-12-01 10:21:03.280 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:21:03 np0005540825 nova_compute[256151]: 2025-12-01 10:21:03.301 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 05:21:03 np0005540825 nova_compute[256151]: 2025-12-01 10:21:03.333 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 05:21:03 np0005540825 nova_compute[256151]: 2025-12-01 10:21:03.333 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:21:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:03.675Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:21:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 419 KiB/s rd, 2.1 MiB/s wr, 219 op/s
Dec  1 05:21:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:21:04 np0005540825 nova_compute[256151]: 2025-12-01 10:21:04.026 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:04.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:04.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:04.578 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:21:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:04.579 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:21:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:04.580 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:21:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:21:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 419 KiB/s rd, 2.1 MiB/s wr, 219 op/s
Dec  1 05:21:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:06.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:06.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:06 np0005540825 nova_compute[256151]: 2025-12-01 10:21:06.305 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:07.234Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:21:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:07.234Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:21:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:07.234Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:21:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 419 KiB/s rd, 2.2 MiB/s wr, 219 op/s
Dec  1 05:21:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:08.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:08.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:08 np0005540825 nova_compute[256151]: 2025-12-01 10:21:08.597 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:08 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:08.597 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:21:08 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:08.598 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 05:21:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:21:09 np0005540825 nova_compute[256151]: 2025-12-01 10:21:09.029 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:21:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:21:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:21:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:21:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:21:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:21:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:21:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:21:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 419 KiB/s rd, 2.2 MiB/s wr, 219 op/s
Dec  1 05:21:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:10.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:10.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:21:11 np0005540825 nova_compute[256151]: 2025-12-01 10:21:11.308 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:11] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Dec  1 05:21:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:11] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Dec  1 05:21:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 419 KiB/s rd, 2.2 MiB/s wr, 219 op/s
Dec  1 05:21:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:12.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:13.676Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:21:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 13 KiB/s wr, 1 op/s
Dec  1 05:21:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:21:14 np0005540825 nova_compute[256151]: 2025-12-01 10:21:14.030 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:14.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:14.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:14 np0005540825 podman[271720]: 2025-12-01 10:21:14.276254356 +0000 UTC m=+0.137835413 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.463637) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584475463726, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1110, "num_deletes": 256, "total_data_size": 1846902, "memory_usage": 1884056, "flush_reason": "Manual Compaction"}
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584475481954, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 1819441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27005, "largest_seqno": 28113, "table_properties": {"data_size": 1814239, "index_size": 2598, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11246, "raw_average_key_size": 19, "raw_value_size": 1803652, "raw_average_value_size": 3057, "num_data_blocks": 116, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764584377, "oldest_key_time": 1764584377, "file_creation_time": 1764584475, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 18395 microseconds, and 8716 cpu microseconds.
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.482039) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 1819441 bytes OK
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.482079) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.483934) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.483962) EVENT_LOG_v1 {"time_micros": 1764584475483953, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.483991) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 1841876, prev total WAL file size 1841876, number of live WAL files 2.
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.485248) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(1776KB)], [59(14MB)]
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584475485381, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 16964714, "oldest_snapshot_seqno": -1}
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5942 keys, 16845672 bytes, temperature: kUnknown
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584475588918, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 16845672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16802858, "index_size": 26832, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14917, "raw_key_size": 151783, "raw_average_key_size": 25, "raw_value_size": 16692445, "raw_average_value_size": 2809, "num_data_blocks": 1098, "num_entries": 5942, "num_filter_entries": 5942, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764584475, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.589182) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16845672 bytes
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.591353) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.7 rd, 162.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 14.4 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(18.6) write-amplify(9.3) OK, records in: 6468, records dropped: 526 output_compression: NoCompression
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.591372) EVENT_LOG_v1 {"time_micros": 1764584475591363, "job": 32, "event": "compaction_finished", "compaction_time_micros": 103606, "compaction_time_cpu_micros": 56291, "output_level": 6, "num_output_files": 1, "total_output_size": 16845672, "num_input_records": 6468, "num_output_records": 5942, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
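The amplification figures in the compaction summary can be reproduced from the byte counts logged for JOB 32 (input 16,964,714 bytes across L0+L6, output table #62 at 16,845,672 bytes, L0 input table #61 at 1,819,441 bytes):

```python
# Worked check of JOB 32's write-amplify(9.3) and
# read-write-amplify(18.6) from the logged byte counts.
l0_in = 1_819_441                 # level-0 input (table #61)
total_in = 16_964_714             # L0 + L6 input (compaction_started)
out = 16_845_672                  # output (table #62)

print(round(out / l0_in, 1))               # 9.3  (write-amplify)
print(round((total_in + out) / l0_in, 1))  # 18.6 (read-write-amplify)
```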
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584475591897, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584475595203, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.485113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.595329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.595338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.595341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.595344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:21:15 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:21:15.595347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:21:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 16 KiB/s wr, 1 op/s
Dec  1 05:21:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:16.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:16.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:16 np0005540825 nova_compute[256151]: 2025-12-01 10:21:16.334 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:17.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:21:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 14 KiB/s wr, 1 op/s
Dec  1 05:21:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:18.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:18.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:18 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:18.601 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 05:21:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:21:19 np0005540825 nova_compute[256151]: 2025-12-01 10:21:19.032 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 3.3 KiB/s wr, 0 op/s
Dec  1 05:21:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:20.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:20.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
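The radosgw "beast" lines repeat on a ~2-second beat: anonymous HEAD / HTTP/1.0 probes from 192.168.122.102 and 192.168.122.100, the signature of load-balancer health checks rather than user traffic. A sketch that tallies request counts and mean latency per client from exactly this access-log format (the path "messages" is a stand-in for wherever this log lives):

import re
from collections import defaultdict

# Matches the beast access-log lines seen above, e.g.:
# beast: 0x7fdf...: 192.168.122.102 - anonymous [...] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
BEAST = re.compile(
    r'beast: \S+: (?P<ip>\d+\.\d+\.\d+\.\d+) .*'
    r'"(?P<req>[^"]+)" (?P<status>\d+) .* latency=(?P<lat>[0-9.]+)s'
)

stats = defaultdict(lambda: {"n": 0, "lat": 0.0})
with open("messages") as f:          # hypothetical path to this log
    for line in f:
        m = BEAST.search(line)
        if m:
            s = stats[m["ip"]]
            s["n"] += 1
            s["lat"] += float(m["lat"])

for ip, s in sorted(stats.items()):
    print(f'{ip}: {s["n"]} requests, mean latency {s["lat"] / s["n"]:.6f}s')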
Dec  1 05:21:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:21:21 np0005540825 nova_compute[256151]: 2025-12-01 10:21:21.389 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:21] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Dec  1 05:21:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:21] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Dec  1 05:21:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 7.7 KiB/s wr, 1 op/s
Dec  1 05:21:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:22.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:22.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:23.677Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:21:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 7.6 KiB/s wr, 1 op/s
Dec  1 05:21:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:21:24 np0005540825 nova_compute[256151]: 2025-12-01 10:21:24.034 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:24.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:24.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:21:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:21:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:21:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 9.0 KiB/s wr, 2 op/s
Dec  1 05:21:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:26.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000052s ======
Dec  1 05:21:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:26.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec  1 05:21:26 np0005540825 nova_compute[256151]: 2025-12-01 10:21:26.418 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:27.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:21:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 6.3 KiB/s wr, 1 op/s
Dec  1 05:21:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:28.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:28.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
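Each nfs-ganesha cycle above is the same four steps: enter a 90-second grace period, reload client recovery info from the backend, try to lift grace (there is nothing to wait for: "reclaim complete(0) clid count(0)"), and check cluster grace enforcement (ret=-45). The cycle restarts roughly every 5 seconds, long before the 90-second window could expire. A quick check of that cadence using the timestamps ganesha prints itself:

from datetime import datetime

# Ganesha stamps its own lines "DD/MM/YYYY HH:MM:SS"; these are the
# nfs_start_grace times from the cycles above.
samples = ["01/12/2025 10:21:18", "01/12/2025 10:21:23",
           "01/12/2025 10:21:28", "01/12/2025 10:21:33"]

ts = [datetime.strptime(s, "%d/%m/%Y %H:%M:%S") for s in samples]
gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
print(gaps)   # -> [5.0, 5.0, 5.0]: grace is re-armed every ~5 s, well inside the 90 s window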
Dec  1 05:21:29 np0005540825 nova_compute[256151]: 2025-12-01 10:21:29.036 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:29 np0005540825 podman[271786]: 2025-12-01 10:21:29.237681032 +0000 UTC m=+0.100051049 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:21:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 5.7 KiB/s wr, 1 op/s
Dec  1 05:21:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:30.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:30.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:21:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:31] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:21:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:31] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:21:31 np0005540825 nova_compute[256151]: 2025-12-01 10:21:31.420 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 9.0 KiB/s wr, 2 op/s
Dec  1 05:21:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:32.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:32.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:33 np0005540825 podman[271810]: 2025-12-01 10:21:33.228346938 +0000 UTC m=+0.087508981 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:21:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:33.678Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:21:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 4.7 KiB/s wr, 1 op/s
Dec  1 05:21:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:21:34 np0005540825 nova_compute[256151]: 2025-12-01 10:21:34.038 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:34.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:34.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
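These recurring _set_new_cache_sizes lines are the monitor's cache tuner re-splitting its memory budget (the names suggest incremental-osdmap, full-osdmap, and key-value/RocksDB caches). The byte counts read more easily in MiB; a throwaway conversion of the figures from the line above:

# Convert the mon's _set_new_cache_sizes figures (bytes) into MiB.
MiB = 1 << 20
for name, nbytes in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                     ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
    print(f"{name}: {nbytes / MiB:.0f} MiB")
# cache_size: 973 MiB, inc_alloc/full_alloc: 332 MiB, kv_alloc: 304 MiB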
Dec  1 05:21:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 7.3 KiB/s wr, 1 op/s
Dec  1 05:21:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:36.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:36.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:36 np0005540825 nova_compute[256151]: 2025-12-01 10:21:36.423 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:37.237Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:21:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:37.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
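At 10:21:37 the failure mode sharpens: webhook[2] now fails with "dial tcp 192.168.122.102:8443: i/o timeout", so the ceph-dashboard receiver on compute-2 is unreachable at the TCP level, not merely slow (the earlier errors were context-deadline expiries after retries). A minimal probe of the same endpoint, using the URL taken verbatim from the error; the empty alerts list is a placeholder, not a full Alertmanager webhook payload:

import json
import urllib.request

url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
req = urllib.request.Request(
    url,
    data=json.dumps({"alerts": []}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("receiver answered:", resp.status)
except OSError as exc:           # URLError and socket timeouts both land here
    print("receiver unreachable:", exc)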
Dec  1 05:21:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 6.0 KiB/s wr, 1 op/s
Dec  1 05:21:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:38.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:38.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:38 np0005540825 ovn_controller[153404]: 2025-12-01T10:21:38Z|00055|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Dec  1 05:21:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:21:39 np0005540825 nova_compute[256151]: 2025-12-01 10:21:39.040 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:21:39
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.control', 'images', '.nfs', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.log', '.mgr', 'volumes']
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:21:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:21:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:21:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 6.0 KiB/s wr, 1 op/s
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015219164102689327 of space, bias 1.0, pg target 0.4565749230806798 quantized to 32 (current 32)
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
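The pg_autoscaler figures above are internally consistent: each pool's raw PG target is its share of raw capacity times its bias times a cluster-wide PG budget of 300. The budget of 300 is an inference from the logged numbers themselves (consistent with, say, 3 OSDs at the default mon_target_pg_per_osd=100, though the log does not state that); the raw target is then quantized to a power of two, and no pool is far enough from its current pg_num to trigger a change. A check against the logged values:

# Reproduce the pg_autoscaler's raw targets from the usage ratios logged above.
# PG_BUDGET = 300 is an assumption chosen because it reproduces every logged
# target exactly.
PG_BUDGET = 300

pools = {                      # name: (capacity ratio, bias) from the log
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (0.0015219164102689327, 1.0),
    "images":             (0.000665858301588852, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
}

for name, (ratio, bias) in pools.items():
    print(f"{name}: pg target {ratio * bias * PG_BUDGET}")
# e.g. vms: 0.0015219164102689327 * 1.0 * 300 = 0.4565749230806798,
# exactly the "pg target" logged for that pool above.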
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:21:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:21:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:40.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:21:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:40.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:21:40 np0005540825 podman[272012]: 2025-12-01 10:21:40.848124941 +0000 UTC m=+0.073274780 container create a0b21056ffdc6d11ace37ac55230419fef11479c779395d748f112d894bfeddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_boyd, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:21:40 np0005540825 systemd[1]: Started libpod-conmon-a0b21056ffdc6d11ace37ac55230419fef11479c779395d748f112d894bfeddc.scope.
Dec  1 05:21:40 np0005540825 podman[272012]: 2025-12-01 10:21:40.818742775 +0000 UTC m=+0.043892684 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:21:40 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:21:40 np0005540825 podman[272012]: 2025-12-01 10:21:40.958636501 +0000 UTC m=+0.183786370 container init a0b21056ffdc6d11ace37ac55230419fef11479c779395d748f112d894bfeddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_boyd, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  1 05:21:40 np0005540825 podman[272012]: 2025-12-01 10:21:40.972460931 +0000 UTC m=+0.197610800 container start a0b21056ffdc6d11ace37ac55230419fef11479c779395d748f112d894bfeddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_boyd, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  1 05:21:40 np0005540825 podman[272012]: 2025-12-01 10:21:40.977267466 +0000 UTC m=+0.202417375 container attach a0b21056ffdc6d11ace37ac55230419fef11479c779395d748f112d894bfeddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_boyd, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:21:40 np0005540825 nifty_boyd[272028]: 167 167
Dec  1 05:21:40 np0005540825 systemd[1]: libpod-a0b21056ffdc6d11ace37ac55230419fef11479c779395d748f112d894bfeddc.scope: Deactivated successfully.
Dec  1 05:21:40 np0005540825 podman[272012]: 2025-12-01 10:21:40.981835395 +0000 UTC m=+0.206985254 container died a0b21056ffdc6d11ace37ac55230419fef11479c779395d748f112d894bfeddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:21:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay-129784783c7003595fe59419367a04542e433bbf4b0a67dd491b343cd2b102b3-merged.mount: Deactivated successfully.
Dec  1 05:21:41 np0005540825 podman[272012]: 2025-12-01 10:21:41.035547445 +0000 UTC m=+0.260697304 container remove a0b21056ffdc6d11ace37ac55230419fef11479c779395d748f112d894bfeddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:21:41 np0005540825 systemd[1]: libpod-conmon-a0b21056ffdc6d11ace37ac55230419fef11479c779395d748f112d894bfeddc.scope: Deactivated successfully.
Dec  1 05:21:41 np0005540825 podman[272053]: 2025-12-01 10:21:41.294945364 +0000 UTC m=+0.076215037 container create ad2f8da3516b99bf2748b17c4385b6d78ad23a9a5ca5f7dce5684d02999f7b55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:21:41 np0005540825 systemd[1]: Started libpod-conmon-ad2f8da3516b99bf2748b17c4385b6d78ad23a9a5ca5f7dce5684d02999f7b55.scope.
Dec  1 05:21:41 np0005540825 podman[272053]: 2025-12-01 10:21:41.265079346 +0000 UTC m=+0.046349059 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:21:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:41] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
Dec  1 05:21:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:41] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
Dec  1 05:21:41 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:21:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70d579b0cb82f4ab256afec23f6ebd13212033f83045428638ad73a2b15c6ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70d579b0cb82f4ab256afec23f6ebd13212033f83045428638ad73a2b15c6ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70d579b0cb82f4ab256afec23f6ebd13212033f83045428638ad73a2b15c6ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70d579b0cb82f4ab256afec23f6ebd13212033f83045428638ad73a2b15c6ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70d579b0cb82f4ab256afec23f6ebd13212033f83045428638ad73a2b15c6ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:41 np0005540825 podman[272053]: 2025-12-01 10:21:41.410820664 +0000 UTC m=+0.192090387 container init ad2f8da3516b99bf2748b17c4385b6d78ad23a9a5ca5f7dce5684d02999f7b55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Dec  1 05:21:41 np0005540825 podman[272053]: 2025-12-01 10:21:41.426951854 +0000 UTC m=+0.208221517 container start ad2f8da3516b99bf2748b17c4385b6d78ad23a9a5ca5f7dce5684d02999f7b55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:21:41 np0005540825 nova_compute[256151]: 2025-12-01 10:21:41.450 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:41 np0005540825 podman[272053]: 2025-12-01 10:21:41.452721505 +0000 UTC m=+0.233991228 container attach ad2f8da3516b99bf2748b17c4385b6d78ad23a9a5ca5f7dce5684d02999f7b55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:21:41 np0005540825 pensive_banzai[272069]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:21:41 np0005540825 pensive_banzai[272069]: --> All data devices are unavailable
Dec  1 05:21:41 np0005540825 systemd[1]: libpod-ad2f8da3516b99bf2748b17c4385b6d78ad23a9a5ca5f7dce5684d02999f7b55.scope: Deactivated successfully.
Dec  1 05:21:41 np0005540825 podman[272053]: 2025-12-01 10:21:41.871556719 +0000 UTC m=+0.652826392 container died ad2f8da3516b99bf2748b17c4385b6d78ad23a9a5ca5f7dce5684d02999f7b55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  1 05:21:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e70d579b0cb82f4ab256afec23f6ebd13212033f83045428638ad73a2b15c6ad-merged.mount: Deactivated successfully.
Dec  1 05:21:41 np0005540825 podman[272053]: 2025-12-01 10:21:41.931947143 +0000 UTC m=+0.713216806 container remove ad2f8da3516b99bf2748b17c4385b6d78ad23a9a5ca5f7dce5684d02999f7b55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 05:21:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 8.0 KiB/s wr, 2 op/s
Dec  1 05:21:41 np0005540825 systemd[1]: libpod-conmon-ad2f8da3516b99bf2748b17c4385b6d78ad23a9a5ca5f7dce5684d02999f7b55.scope: Deactivated successfully.
Dec  1 05:21:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:42.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:21:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:42.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:21:42 np0005540825 podman[272189]: 2025-12-01 10:21:42.646724197 +0000 UTC m=+0.062767286 container create 62f85fb3c893d769fdc561eb5220ad4c82b464a51f257ce6624cf1cad036ec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:21:42 np0005540825 systemd[1]: Started libpod-conmon-62f85fb3c893d769fdc561eb5220ad4c82b464a51f257ce6624cf1cad036ec83.scope.
Dec  1 05:21:42 np0005540825 podman[272189]: 2025-12-01 10:21:42.616597772 +0000 UTC m=+0.032640901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:21:42 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:21:42 np0005540825 podman[272189]: 2025-12-01 10:21:42.735618034 +0000 UTC m=+0.151661083 container init 62f85fb3c893d769fdc561eb5220ad4c82b464a51f257ce6624cf1cad036ec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  1 05:21:42 np0005540825 podman[272189]: 2025-12-01 10:21:42.74162746 +0000 UTC m=+0.157670509 container start 62f85fb3c893d769fdc561eb5220ad4c82b464a51f257ce6624cf1cad036ec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:21:42 np0005540825 podman[272189]: 2025-12-01 10:21:42.744757822 +0000 UTC m=+0.160800871 container attach 62f85fb3c893d769fdc561eb5220ad4c82b464a51f257ce6624cf1cad036ec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 05:21:42 np0005540825 elegant_panini[272205]: 167 167
Dec  1 05:21:42 np0005540825 systemd[1]: libpod-62f85fb3c893d769fdc561eb5220ad4c82b464a51f257ce6624cf1cad036ec83.scope: Deactivated successfully.
Dec  1 05:21:42 np0005540825 podman[272189]: 2025-12-01 10:21:42.747455222 +0000 UTC m=+0.163498321 container died 62f85fb3c893d769fdc561eb5220ad4c82b464a51f257ce6624cf1cad036ec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  1 05:21:42 np0005540825 systemd[1]: var-lib-containers-storage-overlay-91920efe945b48c1182c46b138c43a99349fdbe0fa166e8f378597642d689d47-merged.mount: Deactivated successfully.
Dec  1 05:21:42 np0005540825 podman[272189]: 2025-12-01 10:21:42.788388509 +0000 UTC m=+0.204431568 container remove 62f85fb3c893d769fdc561eb5220ad4c82b464a51f257ce6624cf1cad036ec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 05:21:42 np0005540825 systemd[1]: libpod-conmon-62f85fb3c893d769fdc561eb5220ad4c82b464a51f257ce6624cf1cad036ec83.scope: Deactivated successfully.
Dec  1 05:21:43 np0005540825 podman[272227]: 2025-12-01 10:21:43.007439197 +0000 UTC m=+0.067928111 container create 11edc2dc4baedeb645859a41ef0cf666e8b2d0fb038ed5a5a2ab6f72a70a61eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:21:43 np0005540825 systemd[1]: Started libpod-conmon-11edc2dc4baedeb645859a41ef0cf666e8b2d0fb038ed5a5a2ab6f72a70a61eb.scope.
Dec  1 05:21:43 np0005540825 podman[272227]: 2025-12-01 10:21:42.980751691 +0000 UTC m=+0.041240645 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:21:43 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:21:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9e04ba4a8a714e207795e8438fc17ef515b28841f29dd8a4fe420b8460544b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9e04ba4a8a714e207795e8438fc17ef515b28841f29dd8a4fe420b8460544b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9e04ba4a8a714e207795e8438fc17ef515b28841f29dd8a4fe420b8460544b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9e04ba4a8a714e207795e8438fc17ef515b28841f29dd8a4fe420b8460544b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:43 np0005540825 podman[272227]: 2025-12-01 10:21:43.126681884 +0000 UTC m=+0.187170858 container init 11edc2dc4baedeb645859a41ef0cf666e8b2d0fb038ed5a5a2ab6f72a70a61eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  1 05:21:43 np0005540825 podman[272227]: 2025-12-01 10:21:43.144975771 +0000 UTC m=+0.205464665 container start 11edc2dc4baedeb645859a41ef0cf666e8b2d0fb038ed5a5a2ab6f72a70a61eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:21:43 np0005540825 podman[272227]: 2025-12-01 10:21:43.148525283 +0000 UTC m=+0.209014207 container attach 11edc2dc4baedeb645859a41ef0cf666e8b2d0fb038ed5a5a2ab6f72a70a61eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]: {
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:    "1": [
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:        {
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            "devices": [
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "/dev/loop3"
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            ],
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            "lv_name": "ceph_lv0",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            "lv_size": "21470642176",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            "name": "ceph_lv0",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            "tags": {
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.cluster_name": "ceph",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.crush_device_class": "",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.encrypted": "0",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.osd_id": "1",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.type": "block",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.vdo": "0",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:                "ceph.with_tpm": "0"
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            },
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            "type": "block",
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:            "vg_name": "ceph_vg0"
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:        }
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]:    ]
Dec  1 05:21:43 np0005540825 affectionate_allen[272243]: }
Dec  1 05:21:43 np0005540825 systemd[1]: libpod-11edc2dc4baedeb645859a41ef0cf666e8b2d0fb038ed5a5a2ab6f72a70a61eb.scope: Deactivated successfully.
Dec  1 05:21:43 np0005540825 podman[272227]: 2025-12-01 10:21:43.489670643 +0000 UTC m=+0.550159557 container died 11edc2dc4baedeb645859a41ef0cf666e8b2d0fb038ed5a5a2ab6f72a70a61eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  1 05:21:43 np0005540825 systemd[1]: var-lib-containers-storage-overlay-6a9e04ba4a8a714e207795e8438fc17ef515b28841f29dd8a4fe420b8460544b-merged.mount: Deactivated successfully.
Dec  1 05:21:43 np0005540825 podman[272227]: 2025-12-01 10:21:43.546093833 +0000 UTC m=+0.606582717 container remove 11edc2dc4baedeb645859a41ef0cf666e8b2d0fb038ed5a5a2ab6f72a70a61eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:21:43 np0005540825 systemd[1]: libpod-conmon-11edc2dc4baedeb645859a41ef0cf666e8b2d0fb038ed5a5a2ab6f72a70a61eb.scope: Deactivated successfully.
Dec  1 05:21:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:43.679Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:21:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 4.7 KiB/s wr, 1 op/s
Dec  1 05:21:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:21:44 np0005540825 nova_compute[256151]: 2025-12-01 10:21:44.066 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:44.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:44.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:44 np0005540825 podman[272362]: 2025-12-01 10:21:44.30505514 +0000 UTC m=+0.069422830 container create a33d3efd291fa565df28436d416ef5d9d333e212ab6c071e19619974f6b2443a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_nightingale, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:21:44 np0005540825 systemd[1]: Started libpod-conmon-a33d3efd291fa565df28436d416ef5d9d333e212ab6c071e19619974f6b2443a.scope.
Dec  1 05:21:44 np0005540825 podman[272362]: 2025-12-01 10:21:44.276040914 +0000 UTC m=+0.040408664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:21:44 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:21:44 np0005540825 podman[272362]: 2025-12-01 10:21:44.404723817 +0000 UTC m=+0.169091577 container init a33d3efd291fa565df28436d416ef5d9d333e212ab6c071e19619974f6b2443a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:21:44 np0005540825 podman[272362]: 2025-12-01 10:21:44.418245649 +0000 UTC m=+0.182613349 container start a33d3efd291fa565df28436d416ef5d9d333e212ab6c071e19619974f6b2443a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  1 05:21:44 np0005540825 podman[272362]: 2025-12-01 10:21:44.422077959 +0000 UTC m=+0.186445739 container attach a33d3efd291fa565df28436d416ef5d9d333e212ab6c071e19619974f6b2443a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Dec  1 05:21:44 np0005540825 boring_nightingale[272379]: 167 167
Dec  1 05:21:44 np0005540825 systemd[1]: libpod-a33d3efd291fa565df28436d416ef5d9d333e212ab6c071e19619974f6b2443a.scope: Deactivated successfully.
Dec  1 05:21:44 np0005540825 podman[272362]: 2025-12-01 10:21:44.427984243 +0000 UTC m=+0.192351963 container died a33d3efd291fa565df28436d416ef5d9d333e212ab6c071e19619974f6b2443a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_nightingale, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:21:44 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f791d7f36be9b7db3a264883c25b69098de478923cd050cab5ad756eeee5fec7-merged.mount: Deactivated successfully.
Dec  1 05:21:44 np0005540825 podman[272362]: 2025-12-01 10:21:44.475428149 +0000 UTC m=+0.239795839 container remove a33d3efd291fa565df28436d416ef5d9d333e212ab6c071e19619974f6b2443a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_nightingale, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  1 05:21:44 np0005540825 systemd[1]: libpod-conmon-a33d3efd291fa565df28436d416ef5d9d333e212ab6c071e19619974f6b2443a.scope: Deactivated successfully.
Dec  1 05:21:44 np0005540825 podman[272376]: 2025-12-01 10:21:44.512278329 +0000 UTC m=+0.151543329 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true)
Dec  1 05:21:44 np0005540825 podman[272428]: 2025-12-01 10:21:44.682919186 +0000 UTC m=+0.038205197 container create 2fcaf6b9fcf91f9056b23903b388037d2f9955bd68b1ae56b5a3675fc83482b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:21:44 np0005540825 systemd[1]: Started libpod-conmon-2fcaf6b9fcf91f9056b23903b388037d2f9955bd68b1ae56b5a3675fc83482b8.scope.
Dec  1 05:21:44 np0005540825 podman[272428]: 2025-12-01 10:21:44.665880352 +0000 UTC m=+0.021166373 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:21:44 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:21:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041a4e16b51cf28a0a31151aa64bbad685093e29075e4e9a230d1400bb834c79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041a4e16b51cf28a0a31151aa64bbad685093e29075e4e9a230d1400bb834c79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041a4e16b51cf28a0a31151aa64bbad685093e29075e4e9a230d1400bb834c79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041a4e16b51cf28a0a31151aa64bbad685093e29075e4e9a230d1400bb834c79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:21:44 np0005540825 podman[272428]: 2025-12-01 10:21:44.77938669 +0000 UTC m=+0.134672731 container init 2fcaf6b9fcf91f9056b23903b388037d2f9955bd68b1ae56b5a3675fc83482b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_yonath, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:21:44 np0005540825 podman[272428]: 2025-12-01 10:21:44.79629391 +0000 UTC m=+0.151579931 container start 2fcaf6b9fcf91f9056b23903b388037d2f9955bd68b1ae56b5a3675fc83482b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:21:44 np0005540825 podman[272428]: 2025-12-01 10:21:44.800602203 +0000 UTC m=+0.155888254 container attach 2fcaf6b9fcf91f9056b23903b388037d2f9955bd68b1ae56b5a3675fc83482b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_yonath, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  1 05:21:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:21:45 np0005540825 lvm[272539]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:21:45 np0005540825 lvm[272539]: VG ceph_vg0 finished
Dec  1 05:21:45 np0005540825 naughty_yonath[272444]: {}
Dec  1 05:21:45 np0005540825 systemd[1]: libpod-2fcaf6b9fcf91f9056b23903b388037d2f9955bd68b1ae56b5a3675fc83482b8.scope: Deactivated successfully.
Dec  1 05:21:45 np0005540825 podman[272428]: 2025-12-01 10:21:45.595196708 +0000 UTC m=+0.950482719 container died 2fcaf6b9fcf91f9056b23903b388037d2f9955bd68b1ae56b5a3675fc83482b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:21:45 np0005540825 systemd[1]: libpod-2fcaf6b9fcf91f9056b23903b388037d2f9955bd68b1ae56b5a3675fc83482b8.scope: Consumed 1.285s CPU time.
Dec  1 05:21:45 np0005540825 systemd[1]: var-lib-containers-storage-overlay-041a4e16b51cf28a0a31151aa64bbad685093e29075e4e9a230d1400bb834c79-merged.mount: Deactivated successfully.
Dec  1 05:21:45 np0005540825 podman[272428]: 2025-12-01 10:21:45.653783894 +0000 UTC m=+1.009069895 container remove 2fcaf6b9fcf91f9056b23903b388037d2f9955bd68b1ae56b5a3675fc83482b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_yonath, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 05:21:45 np0005540825 systemd[1]: libpod-conmon-2fcaf6b9fcf91f9056b23903b388037d2f9955bd68b1ae56b5a3675fc83482b8.scope: Deactivated successfully.
Dec  1 05:21:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:21:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:21:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:21:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:21:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 14 KiB/s wr, 2 op/s
Dec  1 05:21:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:46.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:46 np0005540825 nova_compute[256151]: 2025-12-01 10:21:46.453 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:46 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:21:46 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:21:46 np0005540825 nova_compute[256151]: 2025-12-01 10:21:46.916 256155 DEBUG oslo_concurrency.lockutils [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "f38af490-c2f2-4870-a0c3-c676494aad55" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:21:46 np0005540825 nova_compute[256151]: 2025-12-01 10:21:46.916 256155 DEBUG oslo_concurrency.lockutils [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:21:46 np0005540825 nova_compute[256151]: 2025-12-01 10:21:46.916 256155 DEBUG oslo_concurrency.lockutils [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:21:46 np0005540825 nova_compute[256151]: 2025-12-01 10:21:46.917 256155 DEBUG oslo_concurrency.lockutils [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:21:46 np0005540825 nova_compute[256151]: 2025-12-01 10:21:46.917 256155 DEBUG oslo_concurrency.lockutils [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:21:46 np0005540825 nova_compute[256151]: 2025-12-01 10:21:46.919 256155 INFO nova.compute.manager [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Terminating instance#033[00m
Dec  1 05:21:46 np0005540825 nova_compute[256151]: 2025-12-01 10:21:46.920 256155 DEBUG nova.compute.manager [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 05:21:47 np0005540825 kernel: tape5d534a7-8e (unregistering): left promiscuous mode
Dec  1 05:21:47 np0005540825 NetworkManager[48963]: <info>  [1764584507.0778] device (tape5d534a7-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 05:21:47 np0005540825 ovn_controller[153404]: 2025-12-01T10:21:47Z|00056|binding|INFO|Releasing lport e5d534a7-8e7b-4873-8258-5fac7c090568 from this chassis (sb_readonly=0)
Dec  1 05:21:47 np0005540825 ovn_controller[153404]: 2025-12-01T10:21:47Z|00057|binding|INFO|Setting lport e5d534a7-8e7b-4873-8258-5fac7c090568 down in Southbound
Dec  1 05:21:47 np0005540825 ovn_controller[153404]: 2025-12-01T10:21:47Z|00058|binding|INFO|Removing iface tape5d534a7-8e ovn-installed in OVS
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.093 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.100 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:c5:7f 10.100.0.24'], port_security=['fa:16:3e:85:c5:7f 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': 'f38af490-c2f2-4870-a0c3-c676494aad55', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e5b3de9-56f5-4f4d-87c1-c01596567748', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94f4da7f-2053-479c-a6f3-8ad7a572c1e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26afc505-4eba-4dd1-91d0-5142d49a2356, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=e5d534a7-8e7b-4873-8258-5fac7c090568) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.102 163291 INFO neutron.agent.ovn.metadata.agent [-] Port e5d534a7-8e7b-4873-8258-5fac7c090568 in datapath 0e5b3de9-56f5-4f4d-87c1-c01596567748 unbound from our chassis#033[00m
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.103 163291 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e5b3de9-56f5-4f4d-87c1-c01596567748, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.104 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1d1118-cce3-489d-8aef-7eb9be9102ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.105 163291 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748 namespace which is not needed anymore#033[00m
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.129 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:47 np0005540825 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  1 05:21:47 np0005540825 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 16.553s CPU time.
Dec  1 05:21:47 np0005540825 systemd-machined[216307]: Machine qemu-3-instance-00000007 terminated.
Dec  1 05:21:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:47.239Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:21:47 np0005540825 neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748[271567]: [NOTICE]   (271571) : haproxy version is 2.8.14-c23fe91
Dec  1 05:21:47 np0005540825 neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748[271567]: [NOTICE]   (271571) : path to executable is /usr/sbin/haproxy
Dec  1 05:21:47 np0005540825 neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748[271567]: [WARNING]  (271571) : Exiting Master process...
Dec  1 05:21:47 np0005540825 neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748[271567]: [WARNING]  (271571) : Exiting Master process...
Dec  1 05:21:47 np0005540825 neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748[271567]: [ALERT]    (271571) : Current worker (271573) exited with code 143 (Terminated)
Dec  1 05:21:47 np0005540825 neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748[271567]: [WARNING]  (271571) : All workers exited. Exiting... (0)
Dec  1 05:21:47 np0005540825 systemd[1]: libpod-27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860.scope: Deactivated successfully.
Dec  1 05:21:47 np0005540825 podman[272610]: 2025-12-01 10:21:47.276631662 +0000 UTC m=+0.055803825 container died 27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:21:47 np0005540825 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860-userdata-shm.mount: Deactivated successfully.
Dec  1 05:21:47 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a3777ef8f94a0385a4e6cb0dd904f418748ac565a1093fa51e7f3ed76a8067db-merged.mount: Deactivated successfully.
Dec  1 05:21:47 np0005540825 podman[272610]: 2025-12-01 10:21:47.325545836 +0000 UTC m=+0.104717999 container cleanup 27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  1 05:21:47 np0005540825 NetworkManager[48963]: <info>  [1764584507.3459] manager: (tape5d534a7-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Dec  1 05:21:47 np0005540825 systemd[1]: libpod-conmon-27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860.scope: Deactivated successfully.
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.362 256155 INFO nova.virt.libvirt.driver [-] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Instance destroyed successfully.#033[00m
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.363 256155 DEBUG nova.objects.instance [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'resources' on Instance uuid f38af490-c2f2-4870-a0c3-c676494aad55 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.382 256155 DEBUG nova.virt.libvirt.vif [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T10:20:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-639918817',display_name='tempest-TestNetworkBasicOps-server-639918817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-639918817',id=7,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHtnv9NpVKDTfH0AF7Ug60W/MxmIJo2CT7fYSCvCKLYl7NoVFoTmizifAIbXo2JZu5ZWoR0iRQ9Zn+lHLe3BED+b0i3R0WUHKDORFyNZe5Erfivryp4oxHPAOWYul9Ucbg==',key_name='tempest-TestNetworkBasicOps-1633426243',keypairs=<?>,launch_index=0,launched_at=2025-12-01T10:20:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-vzkf3nd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T10:20:45Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=f38af490-c2f2-4870-a0c3-c676494aad55,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e5d534a7-8e7b-4873-8258-5fac7c090568", "address": "fa:16:3e:85:c5:7f", "network": {"id": "0e5b3de9-56f5-4f4d-87c1-c01596567748", "bridge": "br-int", "label": "tempest-network-smoke--1903173849", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5d534a7-8e", "ovs_interfaceid": "e5d534a7-8e7b-4873-8258-5fac7c090568", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.383 256155 DEBUG nova.network.os_vif_util [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "e5d534a7-8e7b-4873-8258-5fac7c090568", "address": "fa:16:3e:85:c5:7f", "network": {"id": "0e5b3de9-56f5-4f4d-87c1-c01596567748", "bridge": "br-int", "label": "tempest-network-smoke--1903173849", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5d534a7-8e", "ovs_interfaceid": "e5d534a7-8e7b-4873-8258-5fac7c090568", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.383 256155 DEBUG nova.network.os_vif_util [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:c5:7f,bridge_name='br-int',has_traffic_filtering=True,id=e5d534a7-8e7b-4873-8258-5fac7c090568,network=Network(0e5b3de9-56f5-4f4d-87c1-c01596567748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5d534a7-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.384 256155 DEBUG os_vif [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:c5:7f,bridge_name='br-int',has_traffic_filtering=True,id=e5d534a7-8e7b-4873-8258-5fac7c090568,network=Network(0e5b3de9-56f5-4f4d-87c1-c01596567748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5d534a7-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.386 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.386 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d534a7-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.389 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.393 256155 INFO os_vif [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:c5:7f,bridge_name='br-int',has_traffic_filtering=True,id=e5d534a7-8e7b-4873-8258-5fac7c090568,network=Network(0e5b3de9-56f5-4f4d-87c1-c01596567748),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5d534a7-8e')#033[00m
Dec  1 05:21:47 np0005540825 podman[272641]: 2025-12-01 10:21:47.436171239 +0000 UTC m=+0.073946528 container remove 27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.442 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[0679de16-7d56-4c03-a7ee-b0cb430328f2]: (4, ('Mon Dec  1 10:21:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748 (27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860)\n27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860\nMon Dec  1 10:21:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748 (27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860)\n27f1ef040fba6798c8a5aac2225c1240c4a1fc1e022f891a2d16042bd2730860\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.445 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[63e7a7bb-7852-4bac-8bb0-3ed107913e4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.447 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e5b3de9-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
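Note: both DelPortCommand transactions above (nova's at 10:21:47.386 for tape5d534a7-8e on br-int, the metadata agent's at 10:21:47.447 for tap0e5b3de9-50 with bridge=None) are issued through ovsdbapp. A minimal sketch of the same call follows; the ovsdb-server socket path is an assumption, since the log does not record which endpoint the agents are connected to.

    # Sketch: reproduce DelPortCommand(..., if_exists=True) with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumption: default local Open vSwitch DB socket.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Passing bridge=None (as in the 10:21:47.447 txn) makes ovsdbapp
    # look the owning bridge up itself.
    api.del_port('tape5d534a7-8e', bridge='br-int',
                 if_exists=True).execute(check_error=True)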
Dec  1 05:21:47 np0005540825 kernel: tap0e5b3de9-50: left promiscuous mode
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.449 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.475 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.478 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[0a119568-13bb-42cc-8d46-59e315fe8813]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.496 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[2af72e0c-011e-4158-beee-f63f3e86dcf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.497 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[544036be-e89b-42ff-a622-4f7b15121a72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.520 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[8e506fcb-321c-4694-96ad-b7cd248ffeb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431374, 'reachable_time': 26981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272683, 'error': None, 'target': 'ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.524 163408 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  1 05:21:47 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:47.524 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[bc87e271-e3ad-494a-b440-e435fe17bc76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 05:21:47 np0005540825 systemd[1]: run-netns-ovnmeta\x2d0e5b3de9\x2d56f5\x2d4f4d\x2d87c1\x2dc01596567748.mount: Deactivated successfully.
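Note: the remove_netns call logged at 10:21:47.524 (and the matching systemd mount deactivation above) is neutron's privileged ip_lib deleting the metadata namespace via pyroute2. A minimal equivalent, with the namespace name copied from the log, looks like this; it needs root and is a sketch, not neutron's exact code path.

    # Sketch: delete a named network namespace the way neutron's
    # privileged helper does (pyroute2 unlinks /run/netns/<name>).
    from pyroute2 import netns

    NS = 'ovnmeta-0e5b3de9-56f5-4f4d-87c1-c01596567748'
    if NS in netns.listnetns():   # avoid ENOENT on a repeated call
        netns.remove(NS)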
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.836 256155 INFO nova.virt.libvirt.driver [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Deleting instance files /var/lib/nova/instances/f38af490-c2f2-4870-a0c3-c676494aad55_del
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.837 256155 INFO nova.virt.libvirt.driver [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Deletion of /var/lib/nova/instances/f38af490-c2f2-4870-a0c3-c676494aad55_del complete
Dec  1 05:21:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 11 KiB/s wr, 2 op/s
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.954 256155 INFO nova.compute.manager [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Took 1.03 seconds to destroy the instance on the hypervisor.
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.956 256155 DEBUG oslo.service.loopingcall [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.956 256155 DEBUG nova.compute.manager [-] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  1 05:21:47 np0005540825 nova_compute[256151]: 2025-12-01 10:21:47.956 256155 DEBUG nova.network.neutron [-] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.119 256155 DEBUG nova.compute.manager [req-d6408115-b4f6-4eef-897a-ca531fae538a req-e8274ba0-4569-4b56-bc47-e2b8447021d8 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Received event network-vif-unplugged-e5d534a7-8e7b-4873-8258-5fac7c090568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.119 256155 DEBUG oslo_concurrency.lockutils [req-d6408115-b4f6-4eef-897a-ca531fae538a req-e8274ba0-4569-4b56-bc47-e2b8447021d8 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.120 256155 DEBUG oslo_concurrency.lockutils [req-d6408115-b4f6-4eef-897a-ca531fae538a req-e8274ba0-4569-4b56-bc47-e2b8447021d8 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.120 256155 DEBUG oslo_concurrency.lockutils [req-d6408115-b4f6-4eef-897a-ca531fae538a req-e8274ba0-4569-4b56-bc47-e2b8447021d8 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.120 256155 DEBUG nova.compute.manager [req-d6408115-b4f6-4eef-897a-ca531fae538a req-e8274ba0-4569-4b56-bc47-e2b8447021d8 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] No waiting events found dispatching network-vif-unplugged-e5d534a7-8e7b-4873-8258-5fac7c090568 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.120 256155 DEBUG nova.compute.manager [req-d6408115-b4f6-4eef-897a-ca531fae538a req-e8274ba0-4569-4b56-bc47-e2b8447021d8 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Received event network-vif-unplugged-e5d534a7-8e7b-4873-8258-5fac7c090568 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
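Note: the Acquiring/acquired/released triple above is oslo.concurrency's standard lock logging around nova's per-instance event queue (the lock name is "<instance-uuid>-events"). The underlying pattern, with the lock name copied from the log and a stand-in body, is simply:

    # Sketch of the oslo.concurrency lock pattern seen in the log.
    from oslo_concurrency import lockutils

    with lockutils.lock('f38af490-c2f2-4870-a0c3-c676494aad55-events'):
        # pop the matching network-vif-unplugged event, as _pop_event does
        pass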
Dec  1 05:21:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:48.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
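Note: the anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, recurring roughly every two seconds throughout this section, look like load-balancer health probes against radosgw. An equivalent probe is trivial; both the target host and the beast frontend port below are assumptions, since these lines record only the client side (and http.client speaks HTTP/1.1 rather than the probes' HTTP/1.0).

    # Sketch: hand-rolled radosgw liveness probe (host/port assumed).
    import http.client

    conn = http.client.HTTPConnection('np0005540825', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # radosgw answers 200 with no body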
Dec  1 05:21:48 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:48.304 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.305 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:48 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:48.306 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 05:21:48 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:21:48.307 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 05:21:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:48.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.885 256155 DEBUG nova.network.neutron [-] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.902 256155 INFO nova.compute.manager [-] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Took 0.95 seconds to deallocate network for instance.
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.947 256155 DEBUG oslo_concurrency.lockutils [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.947 256155 DEBUG oslo_concurrency.lockutils [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:21:48 np0005540825 nova_compute[256151]: 2025-12-01 10:21:48.951 256155 DEBUG nova.compute.manager [req-83a004ec-c6a4-487e-9d45-6c0cd64451a0 req-ff8e9911-a9a9-4cbe-a09a-01a7d4279f35 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Received event network-vif-deleted-e5d534a7-8e7b-4873-8258-5fac7c090568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:21:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:21:49 np0005540825 nova_compute[256151]: 2025-12-01 10:21:49.011 256155 DEBUG oslo_concurrency.processutils [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:21:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:21:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1601500382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:21:49 np0005540825 nova_compute[256151]: 2025-12-01 10:21:49.538 256155 DEBUG oslo_concurrency.processutils [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:21:49 np0005540825 nova_compute[256151]: 2025-12-01 10:21:49.546 256155 DEBUG nova.compute.provider_tree [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:21:49 np0005540825 nova_compute[256151]: 2025-12-01 10:21:49.568 256155 DEBUG nova.scheduler.client.report [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:21:49 np0005540825 nova_compute[256151]: 2025-12-01 10:21:49.601 256155 DEBUG oslo_concurrency.lockutils [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:21:49 np0005540825 nova_compute[256151]: 2025-12-01 10:21:49.646 256155 INFO nova.scheduler.client.report [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Deleted allocations for instance f38af490-c2f2-4870-a0c3-c676494aad55
Dec  1 05:21:49 np0005540825 nova_compute[256151]: 2025-12-01 10:21:49.730 256155 DEBUG oslo_concurrency.lockutils [None req-765d7d07-dd7c-4601-a771-c522f34f10c0 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
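Note: the inventory dict logged at 10:21:49.568 fully determines what placement will schedule onto this node; usable capacity per resource class is (total - reserved) * allocation_ratio. A quick worked check against the logged numbers:

    # Worked check of the inventory reported above.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2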
Dec  1 05:21:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 11 KiB/s wr, 2 op/s
Dec  1 05:21:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:50.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:50.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:50 np0005540825 nova_compute[256151]: 2025-12-01 10:21:50.415 256155 DEBUG nova.compute.manager [req-147542ea-fef5-4b24-9a95-0eacf0562983 req-366358d0-0d87-41ad-ad12-32f01e0faa1f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Received event network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:21:50 np0005540825 nova_compute[256151]: 2025-12-01 10:21:50.416 256155 DEBUG oslo_concurrency.lockutils [req-147542ea-fef5-4b24-9a95-0eacf0562983 req-366358d0-0d87-41ad-ad12-32f01e0faa1f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:21:50 np0005540825 nova_compute[256151]: 2025-12-01 10:21:50.417 256155 DEBUG oslo_concurrency.lockutils [req-147542ea-fef5-4b24-9a95-0eacf0562983 req-366358d0-0d87-41ad-ad12-32f01e0faa1f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:21:50 np0005540825 nova_compute[256151]: 2025-12-01 10:21:50.417 256155 DEBUG oslo_concurrency.lockutils [req-147542ea-fef5-4b24-9a95-0eacf0562983 req-366358d0-0d87-41ad-ad12-32f01e0faa1f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "f38af490-c2f2-4870-a0c3-c676494aad55-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:21:50 np0005540825 nova_compute[256151]: 2025-12-01 10:21:50.417 256155 DEBUG nova.compute.manager [req-147542ea-fef5-4b24-9a95-0eacf0562983 req-366358d0-0d87-41ad-ad12-32f01e0faa1f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] No waiting events found dispatching network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 05:21:50 np0005540825 nova_compute[256151]: 2025-12-01 10:21:50.418 256155 WARNING nova.compute.manager [req-147542ea-fef5-4b24-9a95-0eacf0562983 req-366358d0-0d87-41ad-ad12-32f01e0faa1f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Received unexpected event network-vif-plugged-e5d534a7-8e7b-4873-8258-5fac7c090568 for instance with vm_state deleted and task_state None.
Dec  1 05:21:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:21:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:51] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
Dec  1 05:21:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:21:51] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
Dec  1 05:21:51 np0005540825 nova_compute[256151]: 2025-12-01 10:21:51.596 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 30 op/s
Dec  1 05:21:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:52.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:52.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:52 np0005540825 nova_compute[256151]: 2025-12-01 10:21:52.389 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:53.681Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:21:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 29 op/s
Dec  1 05:21:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:21:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:54.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:54.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:54 np0005540825 nova_compute[256151]: 2025-12-01 10:21:54.335 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:21:54 np0005540825 nova_compute[256151]: 2025-12-01 10:21:54.335 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:21:54 np0005540825 nova_compute[256151]: 2025-12-01 10:21:54.355 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:21:54 np0005540825 nova_compute[256151]: 2025-12-01 10:21:54.355 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:21:54 np0005540825 nova_compute[256151]: 2025-12-01 10:21:54.356 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:21:54 np0005540825 nova_compute[256151]: 2025-12-01 10:21:54.370 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:21:54 np0005540825 nova_compute[256151]: 2025-12-01 10:21:54.370 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:21:54 np0005540825 nova_compute[256151]: 2025-12-01 10:21:54.371 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:21:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:21:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:21:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:21:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 30 op/s
Dec  1 05:21:56 np0005540825 nova_compute[256151]: 2025-12-01 10:21:56.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:21:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:56.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:56.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:56 np0005540825 nova_compute[256151]: 2025-12-01 10:21:56.642 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:57 np0005540825 nova_compute[256151]: 2025-12-01 10:21:57.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:21:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:21:57.239Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:21:57 np0005540825 nova_compute[256151]: 2025-12-01 10:21:57.391 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:57 np0005540825 nova_compute[256151]: 2025-12-01 10:21:57.603 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:21:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 28 op/s
Dec  1 05:21:58 np0005540825 nova_compute[256151]: 2025-12-01 10:21:58.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:21:58 np0005540825 nova_compute[256151]: 2025-12-01 10:21:58.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
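Note: the "CONF.reclaim_instance_interval <= 0, skipping" line means soft-delete reclaim is disabled on this node, so deletes take effect immediately. All of the "Running periodic task ..." entries in this window come from oslo.service's periodic-task machinery; a minimal, self-contained example of that machinery follows (the spacing value and task body are ours, not nova's).

    # Sketch: the oslo.service periodic-task pattern behind these log lines.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _reclaim_queued_deletes(self, context):
            print('would reclaim soft-deleted instances here')

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # normally driven by a timer loop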
Dec  1 05:21:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:21:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:21:58.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:21:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:21:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:21:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:21:58.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:21:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:21:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:21:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:21:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:21:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:21:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 28 op/s
Dec  1 05:22:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:00.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:00 np0005540825 podman[272721]: 2025-12-01 10:22:00.23937525 +0000 UTC m=+0.102135313 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
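Note: the health_status=healthy event above is podman executing the container's configured healthcheck ('test': '/openstack/healthcheck' in the config_data). The same check can be run by hand with podman's healthcheck subcommand; the container name is copied from the log.

    # Sketch: trigger the configured healthcheck manually.
    import subprocess

    rc = subprocess.call(['podman', 'healthcheck', 'run',
                          'ovn_metadata_agent'])
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')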
Dec  1 05:22:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:00.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.083 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.083 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.083 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.084 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.084 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
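Note: the resource tracker shells out to "ceph df" (as above, and again at 10:21:49 and 10:22:01) to size the RBD-backed disk inventory. A sketch of consuming the same call follows; the JSON field names follow ceph's documented output schema, and values will obviously differ per cluster.

    # Sketch: run and parse the same "ceph df" the resource tracker uses.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']          # cluster-wide totals
    print('free GiB:', round(stats['total_avail_bytes'] / 2**30, 1))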
Dec  1 05:22:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:01] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec  1 05:22:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:01] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Dec  1 05:22:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:22:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596614569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.549 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.644 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.765 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.768 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4580MB free_disk=59.94247817993164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.768 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.768 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.849 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.849 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:22:01 np0005540825 nova_compute[256151]: 2025-12-01 10:22:01.875 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:22:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 5.7 KiB/s wr, 56 op/s
Dec  1 05:22:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:02.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:02.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:22:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/427530599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:22:02 np0005540825 nova_compute[256151]: 2025-12-01 10:22:02.361 256155 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764584507.3598247, f38af490-c2f2-4870-a0c3-c676494aad55 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 05:22:02 np0005540825 nova_compute[256151]: 2025-12-01 10:22:02.362 256155 INFO nova.compute.manager [-] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] VM Stopped (Lifecycle Event)
Dec  1 05:22:02 np0005540825 nova_compute[256151]: 2025-12-01 10:22:02.372 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:22:02 np0005540825 nova_compute[256151]: 2025-12-01 10:22:02.380 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:22:02 np0005540825 nova_compute[256151]: 2025-12-01 10:22:02.388 256155 DEBUG nova.compute.manager [None req-4b7cba1e-0829-45a3-a4aa-f25ec385a38a - - - - - -] [instance: f38af490-c2f2-4870-a0c3-c676494aad55] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 05:22:02 np0005540825 nova_compute[256151]: 2025-12-01 10:22:02.393 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:02 np0005540825 nova_compute[256151]: 2025-12-01 10:22:02.412 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:22:02 np0005540825 nova_compute[256151]: 2025-12-01 10:22:02.446 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:22:02 np0005540825 nova_compute[256151]: 2025-12-01 10:22:02.446 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:22:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:03.682Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:22:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:03.682Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
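Both ceph-dashboard webhook receivers are unreachable here: the dial to 8443 times out, then the retry budget expires as "context deadline exceeded". An illustrative probe (not part of the deployment) that reproduces the dispatcher's failure mode; the empty payload and the two-second deadline are assumptions for the sketch:

```python
# Probe the same receiver URL alertmanager is failing to reach.
import json
import urllib.request

url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
req = urllib.request.Request(
    url,
    data=json.dumps({"alerts": []}).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    urllib.request.urlopen(req, timeout=2)
except OSError as exc:  # URLError and socket timeouts are OSError subclasses
    print(f"notify attempt failed: {exc}")
```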
Dec  1 05:22:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec  1 05:22:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
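ganesha keeps re-entering a 90 s grace window, and `clid count(0)` shows no clients with reclaimable state each time. A hedged parser for these ganesha.nfsd lines; the field order (timestamp, epoch, host, process[thread], function, component, severity, message) is inferred from the samples, not from ganesha documentation:

```python
import re

# Field layout inferred from the log samples above.
GANESHA_RE = re.compile(
    r'^(?P<ts>\S+ \S+) : epoch (?P<epoch>\S+) : (?P<host>\S+) : '
    r'(?P<proc>\S+)\[(?P<thread>[^\]]+)\] (?P<func>\S+) '
    r':(?P<component>[^:]+):(?P<level>[^:]+):(?P<msg>.*)$'
)

line = ('01/12/2025 10:22:03 : epoch 692d6b3d : compute-0 : '
        'ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT '
        ':NFS Server Now IN GRACE, duration 90')
m = GANESHA_RE.match(line)
print(m.group('func'), '->', m.group('msg').strip())
# nfs_start_grace -> NFS Server Now IN GRACE, duration 90
```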
Dec  1 05:22:04 np0005540825 podman[272790]: 2025-12-01 10:22:04.206783291 +0000 UTC m=+0.069407200 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125)
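podman emits one health_status event per healthcheck run, which is what this journal line captures (health_status=healthy, failing streak 0). A sketch that watches the same event stream; the JSON field names are assumptions, as they vary across podman versions:

```python
# Follow podman healthcheck events like the one logged above.
import json
import subprocess

proc = subprocess.Popen(
    ["podman", "events", "--filter", "event=health_status",
     "--format", "json"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:  # one JSON object per event; Ctrl-C to stop
    evt = json.loads(line)
    # field names are version-dependent assumptions
    print(evt.get("Name"), evt.get("HealthStatus") or evt.get("health_status"))
```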
Dec  1 05:22:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.003000079s ======
Dec  1 05:22:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:04.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Dec  1 05:22:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:04.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
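These anonymous HEAD / probes from .100 and .102 every two seconds look like load-balancer health checks against radosgw. The beast access-log lines follow a fixed field order (client, user, timestamp, request, status, bytes, latency); a hedged parser inferred from the samples, not from radosgw documentation:

```python
import re

# Field layout inferred from the beast lines above.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous '
        '[01/Dec/2025:10:22:04.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.003000079s')
m = BEAST_RE.search(line)
print(m.group('client'), m.group('status'), float(m.group('latency')))
# 192.168.122.102 200 0.003000079
```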
Dec  1 05:22:04 np0005540825 nova_compute[256151]: 2025-12-01 10:22:04.447 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:22:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:04.579 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:22:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:04.580 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:22:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:04.580 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:22:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
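A quick arithmetic check on the recurring mon cache line (the split policy itself is internal to ceph-mon's cache tuner; only the numbers are verified here):

```python
# Back-of-envelope check on _set_new_cache_sizes values.
MiB = 1024 * 1024
cache_size = 1020054731
inc_alloc = full_alloc = 348127232
kv_alloc = 318767104

print(round(cache_size / MiB, 1))              # 972.8 MiB overall budget
print(inc_alloc / MiB, full_alloc / MiB, kv_alloc / MiB)  # 332.0 332.0 304.0
print((inc_alloc + full_alloc + kv_alloc) / MiB)          # 968.0 -- within budget
```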
Dec  1 05:22:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec  1 05:22:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:06.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:06.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:06 np0005540825 nova_compute[256151]: 2025-12-01 10:22:06.646 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  1 05:22:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2051891042' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  1 05:22:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  1 05:22:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2051891042' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
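These audit lines are the mon-side view of the queries client.openstack issues from 192.168.122.10 (the same `ceph df` CMD nova logged at 10:22:02.372). The queries can be reproduced out-of-band; the CLI flags match the logged command, while the JSON field names are assumptions based on current `ceph ... --format=json` output:

```python
# Re-run the two audited mon_commands from the client side.
import json
import subprocess

def mon_json(*args):
    out = subprocess.run(
        ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
         *args, "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

df = mon_json("df")
quota = mon_json("osd", "pool", "get-quota", "volumes")
# field names are assumptions about the installed ceph version
print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))
```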
Dec  1 05:22:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:07.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:22:07 np0005540825 nova_compute[256151]: 2025-12-01 10:22:07.396 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:22:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:08.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:08.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:22:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:22:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:22:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:22:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:22:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:22:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:22:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:22:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:22:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:22:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:10.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:10.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:22:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:11] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:22:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:11] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
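Prometheus scrapes the mgr module every 10 s (the GET /metrics pairs at :11, :21, :31 ...). An illustrative re-scrape and parse; port 9283 is the ceph-mgr prometheus module's default and is not stated in the log, and prometheus_client is just one choice of text-format parser:

```python
import urllib.request
from prometheus_client.parser import text_string_to_metric_families

# Port 9283 is an assumption (ceph-mgr prometheus default).
body = urllib.request.urlopen(
    "http://192.168.122.100:9283/metrics", timeout=5
).read().decode()
for family in text_string_to_metric_families(body):
    if family.name.startswith("ceph_health"):
        for sample in family.samples:
            print(sample.name, sample.labels, sample.value)
```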
Dec  1 05:22:11 np0005540825 nova_compute[256151]: 2025-12-01 10:22:11.673 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:22:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:12.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:12.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:12 np0005540825 nova_compute[256151]: 2025-12-01 10:22:12.398 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:13.684Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:22:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:22:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:22:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:14.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:14.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:15 np0005540825 podman[272846]: 2025-12-01 10:22:15.267614907 +0000 UTC m=+0.132523524 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_controller)
Dec  1 05:22:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:22:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:22:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:16.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:16.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:16 np0005540825 nova_compute[256151]: 2025-12-01 10:22:16.710 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:17.242Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:22:17 np0005540825 nova_compute[256151]: 2025-12-01 10:22:17.400 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:22:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:18.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:18.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:22:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:22:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.003000078s ======
Dec  1 05:22:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:20.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Dec  1 05:22:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:20.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:22:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:21] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:22:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:21] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:22:21 np0005540825 nova_compute[256151]: 2025-12-01 10:22:21.713 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:22:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:22.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:22.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:22 np0005540825 nova_compute[256151]: 2025-12-01 10:22:22.402 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:23.684Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:22:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:23.685Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:22:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:23.685Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:22:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:22:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:22:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:24.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:24.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:22:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:22:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:22:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:22:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:26.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:26.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:26 np0005540825 nova_compute[256151]: 2025-12-01 10:22:26.717 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:27.243Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:22:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:27.243Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:22:27 np0005540825 nova_compute[256151]: 2025-12-01 10:22:27.404 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:22:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:28.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:28.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:22:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:22:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:30.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:30.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:22:31 np0005540825 podman[272916]: 2025-12-01 10:22:31.226958738 +0000 UTC m=+0.084295017 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:22:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:31] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:22:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:31] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:22:31 np0005540825 nova_compute[256151]: 2025-12-01 10:22:31.757 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:22:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:32.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:32.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:32 np0005540825 nova_compute[256151]: 2025-12-01 10:22:32.407 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:33.686Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:22:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:22:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:22:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:34.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:34.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:35 np0005540825 podman[272940]: 2025-12-01 10:22:35.241704182 +0000 UTC m=+0.091951107 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125)
Dec  1 05:22:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:22:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 878 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Dec  1 05:22:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:36.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:36.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:36 np0005540825 nova_compute[256151]: 2025-12-01 10:22:36.760 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:37.244Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:22:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:37.244Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:22:37 np0005540825 nova_compute[256151]: 2025-12-01 10:22:37.409 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 878 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec  1 05:22:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:38.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:38.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:22:39
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'vms', '.rgw.root', '.nfs', 'volumes', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'default.rgw.control']
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
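`prepared 0/10 upmap changes` reads as zero optimizations prepared out of the balancer's per-run cap of 10 (the upmap_max_optimizations default), i.e. the 353 PGs are already balanced. A sketch of the matching out-of-band check; `ceph balancer status` is a real command, while the JSON keys read here are assumptions about the installed version:

```python
# Check the balancer state that produced the pass logged above.
import json
import subprocess

status = json.loads(subprocess.run(
    ["ceph", "balancer", "status", "--format=json"],
    capture_output=True, text=True, check=True,
).stdout)
print(status.get("active"), status.get("mode"))  # expect: True upmap
```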
Dec  1 05:22:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:22:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:22:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 878 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
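Every autoscaler row above solves to pg_target = usage_ratio * bias * 300; 300 would be mon_target_pg_per_osd (default 100) times 3 OSDs, an assumption consistent with the 60 GiB cluster shown in the pgmap lines. A worked check against three of the logged rows:

```python
# pg_target = usage_ratio * bias * K, with K = 300 fitting every row.
rows = [
    ('.mgr',               7.185749983720779e-06,  1.0),
    ('vms',                0.00034841348814872695, 1.0),
    ('cephfs.cephfs.meta', 5.087256625643029e-07,  4.0),
]
K = 100 * 3  # target PGs per OSD (default 100) * assumed 3 OSDs
for pool, usage, bias in rows:
    print(pool, usage * bias * K)
# 0.0021557..., 0.1045240..., 0.0006104... -- matching the logged pg targets,
# which the autoscaler then quantizes to a power of two and leaves at the
# current pg_num unless the change is large enough to act on.
```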
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:22:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:22:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:40.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:40.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
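Annotation: the beast lines are radosgw's frontend access log: client address, user (anonymous here), timestamp, request line, HTTP status, body bytes, then latency; the paired HEAD / probes arriving every two seconds from 192.168.122.100 and .102 have the shape of load-balancer health checks. A regex sketch of the field layout as inferred from these lines (not an official grammar; the three "-" fields are left unparsed):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous '
            '[01/Dec/2025:10:22:40.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m['ip'], m['status'], m['latency'])  # 192.168.122.102 200 0.000000000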
Dec  1 05:22:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
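Annotation: _set_new_cache_sizes is the monitor periodically re-splitting its cache budget between incremental osdmaps, full osdmaps, and the key-value (RocksDB) cache. Converting the logged byte counts to MiB makes the split readable:

    # Byte counts copied from the _set_new_cache_sizes line above.
    for name, nbytes in [("cache_size", 1020054731),
                         ("inc_alloc", 348127232),
                         ("full_alloc", 348127232),
                         ("kv_alloc", 318767104)]:
        print(f"{name}: {nbytes / 2**20:.0f} MiB")
    # cache_size: 973 MiB, inc_alloc/full_alloc: 332 MiB each, kv_alloc: 304 MiB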
Dec  1 05:22:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:41] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec  1 05:22:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:41] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec  1 05:22:41 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Check health
Dec  1 05:22:41 np0005540825 nova_compute[256151]: 2025-12-01 10:22:41.799 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  1 05:22:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:42.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:42.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:42 np0005540825 nova_compute[256151]: 2025-12-01 10:22:42.412 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:43.687Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
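Annotation: Alertmanager's dispatcher is failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2; both POSTs hit the notification timeout ("context deadline exceeded") after two attempts. A quick, hypothetical reachability probe against the same URLs (copied verbatim from the error) can separate an unreachable receiver from a slow one:

    import requests

    # Illustrative probe only; a 5 s client timeout stands in for the
    # dispatcher's deadline.
    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        try:
            r = requests.post(url, json={"alerts": []}, timeout=5)
            print(url, r.status_code)
        except requests.RequestException as exc:
            print(url, "unreachable:", exc)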
Dec  1 05:22:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  1 05:22:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
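Annotation: the ganesha lines show the clustered NFS grace cycle: the server enters a 90-second grace window, reloads reclaimable-client records from its RADOS recovery backend, then checks whether grace can be lifted early. A hedged reduction of that lift decision as the log reports it, where reclaim complete(0) and clid count(0) mean no client holds state to reclaim (the rados_cluster_grace_enforcing ret=-45 is left uninterpreted here):

    # Hypothetical reduction of "check grace: reclaim complete(N) clid
    # count(M)": grace may lift once no tracked client still needs to
    # reclaim state.
    def can_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
        return clid_count == 0 or reclaim_complete == clid_count

    print(can_lift_grace(0, 0))  # True: nothing to wait for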
Dec  1 05:22:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:44.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:44.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:44 np0005540825 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  1 05:22:44 np0005540825 ovn_controller[153404]: 2025-12-01T10:22:44Z|00059|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  1 05:22:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:22:45 np0005540825 podman[272997]: 2025-12-01 10:22:45.941214265 +0000 UTC m=+0.078203569 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
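Annotation: the health_status=healthy event above is podman running the container's configured healthcheck ('test': '/openstack/healthcheck' in config_data) on its timer. The same check can be triggered by hand; a sketch invoking the podman CLI from Python:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # command; exit status 0 means healthy.
    r = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")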
Dec  1 05:22:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec  1 05:22:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:46.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:46.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:46 np0005540825 nova_compute[256151]: 2025-12-01 10:22:46.801 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:22:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:47.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:22:47 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
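Annotation: the handle_command/audit burst above is the cephadm mgr module (mgr.compute-0.fospow) persisting per-host device state under config-key and fetching what it needs for provisioning: a minimal ceph.conf, the client.admin and client.bootstrap-osd keys, and the tree of destroyed OSDs. The same monitor commands can be issued through the python-rados binding; the conffile path and client name below are assumptions:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    try:
        # Same command the mgr dispatched above.
        cmd = json.dumps({"prefix": "osd tree",
                          "states": ["destroyed"],
                          "format": "json"})
        ret, out, errs = cluster.mon_command(cmd, b"")
        print(ret, json.loads(out) if out else {})
    finally:
        cluster.shutdown()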
Dec  1 05:22:47 np0005540825 nova_compute[256151]: 2025-12-01 10:22:47.413 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:22:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 KiB/s wr, 62 op/s
Dec  1 05:22:47 np0005540825 podman[273271]: 2025-12-01 10:22:47.979377213 +0000 UTC m=+0.063737041 container create 9acdf8ffdb31b26ad72481ceff4e84a952c8788ab5a562c36b98dbd0d9deac36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_knuth, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:22:48 np0005540825 systemd[1]: Started libpod-conmon-9acdf8ffdb31b26ad72481ceff4e84a952c8788ab5a562c36b98dbd0d9deac36.scope.
Dec  1 05:22:48 np0005540825 podman[273271]: 2025-12-01 10:22:47.956713783 +0000 UTC m=+0.041073641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:22:48 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:22:48 np0005540825 podman[273271]: 2025-12-01 10:22:48.068140406 +0000 UTC m=+0.152500264 container init 9acdf8ffdb31b26ad72481ceff4e84a952c8788ab5a562c36b98dbd0d9deac36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:22:48 np0005540825 podman[273271]: 2025-12-01 10:22:48.077974383 +0000 UTC m=+0.162334191 container start 9acdf8ffdb31b26ad72481ceff4e84a952c8788ab5a562c36b98dbd0d9deac36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_knuth, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  1 05:22:48 np0005540825 podman[273271]: 2025-12-01 10:22:48.080796576 +0000 UTC m=+0.165156404 container attach 9acdf8ffdb31b26ad72481ceff4e84a952c8788ab5a562c36b98dbd0d9deac36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 05:22:48 np0005540825 confident_knuth[273288]: 167 167
Dec  1 05:22:48 np0005540825 systemd[1]: libpod-9acdf8ffdb31b26ad72481ceff4e84a952c8788ab5a562c36b98dbd0d9deac36.scope: Deactivated successfully.
Dec  1 05:22:48 np0005540825 conmon[273288]: conmon 9acdf8ffdb31b26ad724 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9acdf8ffdb31b26ad72481ceff4e84a952c8788ab5a562c36b98dbd0d9deac36.scope/container/memory.events
Dec  1 05:22:48 np0005540825 podman[273271]: 2025-12-01 10:22:48.088584749 +0000 UTC m=+0.172944607 container died 9acdf8ffdb31b26ad72481ceff4e84a952c8788ab5a562c36b98dbd0d9deac36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:22:48 np0005540825 systemd[1]: var-lib-containers-storage-overlay-cb3d6b3cbd6df67b3889ec10a2058b30bab87b9fed9c927750320da242112697-merged.mount: Deactivated successfully.
Dec  1 05:22:48 np0005540825 podman[273271]: 2025-12-01 10:22:48.140752108 +0000 UTC m=+0.225111966 container remove 9acdf8ffdb31b26ad72481ceff4e84a952c8788ab5a562c36b98dbd0d9deac36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:22:48 np0005540825 systemd[1]: libpod-conmon-9acdf8ffdb31b26ad72481ceff4e84a952c8788ab5a562c36b98dbd0d9deac36.scope: Deactivated successfully.
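Annotation: the create, init, start, attach, died, remove sequence above, all within roughly 0.2 s, is the footprint of a one-shot utility container cephadm launches against the ceph image; its only output was "167 167", which matches the ceph user and group id on these images. A hypothetical equivalent (the image digest is taken from the log; the probed command is an assumption):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # --rm reproduces the immediate died/remove events once the command exits.
    out = subprocess.run(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # e.g. "167 167"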
Dec  1 05:22:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:22:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:48 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:22:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:48.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:48 np0005540825 podman[273311]: 2025-12-01 10:22:48.318713646 +0000 UTC m=+0.047987412 container create 735affd2519fa0f2d1218dd3a9bdf3ec7600df601684bd1dc512f410689f51f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 05:22:48 np0005540825 systemd[1]: Started libpod-conmon-735affd2519fa0f2d1218dd3a9bdf3ec7600df601684bd1dc512f410689f51f5.scope.
Dec  1 05:22:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:48.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:48 np0005540825 podman[273311]: 2025-12-01 10:22:48.295594243 +0000 UTC m=+0.024868029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:22:48 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:22:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22533e6252fc9f6e07a900cb5b564b1b14c95fe46f0b95319501a649d133a5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22533e6252fc9f6e07a900cb5b564b1b14c95fe46f0b95319501a649d133a5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22533e6252fc9f6e07a900cb5b564b1b14c95fe46f0b95319501a649d133a5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22533e6252fc9f6e07a900cb5b564b1b14c95fe46f0b95319501a649d133a5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22533e6252fc9f6e07a900cb5b564b1b14c95fe46f0b95319501a649d133a5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
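Annotation: these xfs warnings fire when a filesystem without the bigtime feature is (re)mounted: 32-bit inode timestamps cap at 0x7fffffff seconds since the epoch. Converting that limit confirms the 2038 date in the messages:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1, the 32-bit time_t maximum.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00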
Dec  1 05:22:48 np0005540825 podman[273311]: 2025-12-01 10:22:48.426556086 +0000 UTC m=+0.155829872 container init 735affd2519fa0f2d1218dd3a9bdf3ec7600df601684bd1dc512f410689f51f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:22:48 np0005540825 podman[273311]: 2025-12-01 10:22:48.434121453 +0000 UTC m=+0.163395209 container start 735affd2519fa0f2d1218dd3a9bdf3ec7600df601684bd1dc512f410689f51f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  1 05:22:48 np0005540825 podman[273311]: 2025-12-01 10:22:48.43707433 +0000 UTC m=+0.166348086 container attach 735affd2519fa0f2d1218dd3a9bdf3ec7600df601684bd1dc512f410689f51f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 05:22:48 np0005540825 pedantic_turing[273327]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:22:48 np0005540825 pedantic_turing[273327]: --> All data devices are unavailable
Dec  1 05:22:48 np0005540825 systemd[1]: libpod-735affd2519fa0f2d1218dd3a9bdf3ec7600df601684bd1dc512f410689f51f5.scope: Deactivated successfully.
Dec  1 05:22:48 np0005540825 podman[273311]: 2025-12-01 10:22:48.766676509 +0000 UTC m=+0.495950265 container died 735affd2519fa0f2d1218dd3a9bdf3ec7600df601684bd1dc512f410689f51f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 05:22:48 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f22533e6252fc9f6e07a900cb5b564b1b14c95fe46f0b95319501a649d133a5b-merged.mount: Deactivated successfully.
Dec  1 05:22:48 np0005540825 podman[273311]: 2025-12-01 10:22:48.816167418 +0000 UTC m=+0.545441164 container remove 735affd2519fa0f2d1218dd3a9bdf3ec7600df601684bd1dc512f410689f51f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 05:22:48 np0005540825 systemd[1]: libpod-conmon-735affd2519fa0f2d1218dd3a9bdf3ec7600df601684bd1dc512f410689f51f5.scope: Deactivated successfully.
Dec  1 05:22:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:22:49 np0005540825 podman[273448]: 2025-12-01 10:22:49.555778931 +0000 UTC m=+0.072947232 container create ede5bdaf212bb01fd9726a31c4571327b31185c9cd2024a7f833acc9d1aaaffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_montalcini, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 05:22:49 np0005540825 systemd[1]: Started libpod-conmon-ede5bdaf212bb01fd9726a31c4571327b31185c9cd2024a7f833acc9d1aaaffc.scope.
Dec  1 05:22:49 np0005540825 podman[273448]: 2025-12-01 10:22:49.526508208 +0000 UTC m=+0.043676569 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:22:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:22:49 np0005540825 podman[273448]: 2025-12-01 10:22:49.657852301 +0000 UTC m=+0.175020672 container init ede5bdaf212bb01fd9726a31c4571327b31185c9cd2024a7f833acc9d1aaaffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec  1 05:22:49 np0005540825 podman[273448]: 2025-12-01 10:22:49.668511748 +0000 UTC m=+0.185680049 container start ede5bdaf212bb01fd9726a31c4571327b31185c9cd2024a7f833acc9d1aaaffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  1 05:22:49 np0005540825 podman[273448]: 2025-12-01 10:22:49.672586675 +0000 UTC m=+0.189754986 container attach ede5bdaf212bb01fd9726a31c4571327b31185c9cd2024a7f833acc9d1aaaffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_montalcini, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:22:49 np0005540825 nice_montalcini[273464]: 167 167
Dec  1 05:22:49 np0005540825 systemd[1]: libpod-ede5bdaf212bb01fd9726a31c4571327b31185c9cd2024a7f833acc9d1aaaffc.scope: Deactivated successfully.
Dec  1 05:22:49 np0005540825 podman[273448]: 2025-12-01 10:22:49.67546809 +0000 UTC m=+0.192636431 container died ede5bdaf212bb01fd9726a31c4571327b31185c9cd2024a7f833acc9d1aaaffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_montalcini, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:22:49 np0005540825 systemd[1]: var-lib-containers-storage-overlay-922b672eb2b36a7556b33a2c1a3b11461bb7bae83de29231e5e1b1769a0249f4-merged.mount: Deactivated successfully.
Dec  1 05:22:49 np0005540825 podman[273448]: 2025-12-01 10:22:49.724780835 +0000 UTC m=+0.241949136 container remove ede5bdaf212bb01fd9726a31c4571327b31185c9cd2024a7f833acc9d1aaaffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:22:49 np0005540825 systemd[1]: libpod-conmon-ede5bdaf212bb01fd9726a31c4571327b31185c9cd2024a7f833acc9d1aaaffc.scope: Deactivated successfully.
Dec  1 05:22:49 np0005540825 podman[273489]: 2025-12-01 10:22:49.940420154 +0000 UTC m=+0.061477923 container create dbc3b24127e78b2898eb9c14e02dbfe521b6c87874f5d8454bd8e86e998119da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_khayyam, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:22:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 KiB/s wr, 62 op/s
Dec  1 05:22:49 np0005540825 systemd[1]: Started libpod-conmon-dbc3b24127e78b2898eb9c14e02dbfe521b6c87874f5d8454bd8e86e998119da.scope.
Dec  1 05:22:50 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:22:50 np0005540825 podman[273489]: 2025-12-01 10:22:49.919496109 +0000 UTC m=+0.040553888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:22:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30ecc89d7bbb40727a16719fc0bb18958bcff5dd83c59c8a3527c34c49d69d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30ecc89d7bbb40727a16719fc0bb18958bcff5dd83c59c8a3527c34c49d69d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30ecc89d7bbb40727a16719fc0bb18958bcff5dd83c59c8a3527c34c49d69d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:50 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30ecc89d7bbb40727a16719fc0bb18958bcff5dd83c59c8a3527c34c49d69d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:50 np0005540825 podman[273489]: 2025-12-01 10:22:50.032145214 +0000 UTC m=+0.153202963 container init dbc3b24127e78b2898eb9c14e02dbfe521b6c87874f5d8454bd8e86e998119da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  1 05:22:50 np0005540825 podman[273489]: 2025-12-01 10:22:50.038359245 +0000 UTC m=+0.159416974 container start dbc3b24127e78b2898eb9c14e02dbfe521b6c87874f5d8454bd8e86e998119da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:22:50 np0005540825 podman[273489]: 2025-12-01 10:22:50.041472976 +0000 UTC m=+0.162530745 container attach dbc3b24127e78b2898eb9c14e02dbfe521b6c87874f5d8454bd8e86e998119da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  1 05:22:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:50.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]: {
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:    "1": [
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:        {
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            "devices": [
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "/dev/loop3"
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            ],
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            "lv_name": "ceph_lv0",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            "lv_size": "21470642176",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            "name": "ceph_lv0",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            "tags": {
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.cluster_name": "ceph",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.crush_device_class": "",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.encrypted": "0",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.osd_id": "1",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.type": "block",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.vdo": "0",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:                "ceph.with_tpm": "0"
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            },
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            "type": "block",
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:            "vg_name": "ceph_vg0"
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:        }
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]:    ]
Dec  1 05:22:50 np0005540825 zealous_khayyam[273505]: }
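Annotation: the JSON emitted by the zealous_khayyam container is a ceph-volume LVM inventory keyed by OSD id: one logical volume (/dev/ceph_vg0/ceph_lv0 on /dev/loop3) carrying osd.1, with its identity recorded in LV tags. A small sketch pulling out the fields cephadm matches on, using a trimmed copy of the payload above:

    import json

    payload = """
    {"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
            "devices": ["/dev/loop3"],
            "tags": {"ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
                     "ceph.osd_id": "1",
                     "ceph.encrypted": "0"}}]}
    """
    for osd_id, lvs in json.loads(payload).items():
        for lv in lvs:
            t = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"(osd_fsid {t['ceph.osd_fsid']}, encrypted={t['ceph.encrypted']})")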
Dec  1 05:22:50 np0005540825 systemd[1]: libpod-dbc3b24127e78b2898eb9c14e02dbfe521b6c87874f5d8454bd8e86e998119da.scope: Deactivated successfully.
Dec  1 05:22:50 np0005540825 podman[273489]: 2025-12-01 10:22:50.359132074 +0000 UTC m=+0.480189813 container died dbc3b24127e78b2898eb9c14e02dbfe521b6c87874f5d8454bd8e86e998119da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_khayyam, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:22:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:50.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:50 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b30ecc89d7bbb40727a16719fc0bb18958bcff5dd83c59c8a3527c34c49d69d0-merged.mount: Deactivated successfully.
Dec  1 05:22:50 np0005540825 podman[273489]: 2025-12-01 10:22:50.405162953 +0000 UTC m=+0.526220682 container remove dbc3b24127e78b2898eb9c14e02dbfe521b6c87874f5d8454bd8e86e998119da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:22:50 np0005540825 systemd[1]: libpod-conmon-dbc3b24127e78b2898eb9c14e02dbfe521b6c87874f5d8454bd8e86e998119da.scope: Deactivated successfully.
Dec  1 05:22:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:22:51 np0005540825 podman[273618]: 2025-12-01 10:22:51.030066897 +0000 UTC m=+0.051454302 container create e748c494b4ae5a2932ef9ddeca8a74b9eb470f4a2e004f066b6899c2ccedc8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:22:51 np0005540825 systemd[1]: Started libpod-conmon-e748c494b4ae5a2932ef9ddeca8a74b9eb470f4a2e004f066b6899c2ccedc8d6.scope.
Dec  1 05:22:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:22:51 np0005540825 podman[273618]: 2025-12-01 10:22:51.007626862 +0000 UTC m=+0.029014247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:22:51 np0005540825 podman[273618]: 2025-12-01 10:22:51.117893825 +0000 UTC m=+0.139281260 container init e748c494b4ae5a2932ef9ddeca8a74b9eb470f4a2e004f066b6899c2ccedc8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_fermat, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  1 05:22:51 np0005540825 podman[273618]: 2025-12-01 10:22:51.125412221 +0000 UTC m=+0.146799616 container start e748c494b4ae5a2932ef9ddeca8a74b9eb470f4a2e004f066b6899c2ccedc8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_fermat, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:22:51 np0005540825 podman[273618]: 2025-12-01 10:22:51.129478527 +0000 UTC m=+0.150865932 container attach e748c494b4ae5a2932ef9ddeca8a74b9eb470f4a2e004f066b6899c2ccedc8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_fermat, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  1 05:22:51 np0005540825 vigilant_fermat[273634]: 167 167
Dec  1 05:22:51 np0005540825 systemd[1]: libpod-e748c494b4ae5a2932ef9ddeca8a74b9eb470f4a2e004f066b6899c2ccedc8d6.scope: Deactivated successfully.
Dec  1 05:22:51 np0005540825 podman[273618]: 2025-12-01 10:22:51.132433204 +0000 UTC m=+0.153820619 container died e748c494b4ae5a2932ef9ddeca8a74b9eb470f4a2e004f066b6899c2ccedc8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 05:22:51 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f7c3ee3f9b7c2b9e2cb81678bb21d2ea2695bfb18802a6760e87c25f4e9ad77c-merged.mount: Deactivated successfully.
Dec  1 05:22:51 np0005540825 podman[273618]: 2025-12-01 10:22:51.179711426 +0000 UTC m=+0.201098821 container remove e748c494b4ae5a2932ef9ddeca8a74b9eb470f4a2e004f066b6899c2ccedc8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_fermat, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 05:22:51 np0005540825 systemd[1]: libpod-conmon-e748c494b4ae5a2932ef9ddeca8a74b9eb470f4a2e004f066b6899c2ccedc8d6.scope: Deactivated successfully.
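
The six podman events above — container create, init, start, attach, died, remove, all within about 0.2 s — are the footprint of a cephadm-style one-shot helper: a throwaway ceph container is run, its stdout is read (here the "167 167" uid/gid pair printed under the vigilant_fermat name), and the container is removed. A minimal sketch of the same pattern, assuming only that podman is on PATH and that the image digest from the log is available locally; the probe command is illustrative, not what cephadm actually runs:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def one_shot(*cmd: str) -> str:
        """Run a throwaway container and return its stdout, mirroring the
        create -> init -> start -> attach -> died -> remove sequence."""
        # --rm makes podman remove the container as soon as the entrypoint
        # exits; check=True surfaces a non-zero exit status as an exception.
        res = subprocess.run(["podman", "run", "--rm", IMAGE, *cmd],
                             capture_output=True, text=True, check=True)
        return res.stdout.strip()

    # Hypothetical probe comparable to the "167 167" output above: the
    # numeric uid/gid that own the ceph state directory inside the image.
    print(one_shot("stat", "-c", "%u %g", "/var/lib/ceph"))
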
Dec  1 05:22:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:51] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec  1 05:22:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:22:51] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec  1 05:22:51 np0005540825 podman[273658]: 2025-12-01 10:22:51.42200839 +0000 UTC m=+0.061925555 container create 121028e40b1346a9587261a28a4ed77468e6d4f0969af916afeaa8615dbeac7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hugle, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:22:51 np0005540825 systemd[1]: Started libpod-conmon-121028e40b1346a9587261a28a4ed77468e6d4f0969af916afeaa8615dbeac7c.scope.
Dec  1 05:22:51 np0005540825 podman[273658]: 2025-12-01 10:22:51.399975676 +0000 UTC m=+0.039892821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:22:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:22:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44410e71eaac671355ed69304ece35dbeb1f00b51cbae82a928f2e5d0a05b93b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44410e71eaac671355ed69304ece35dbeb1f00b51cbae82a928f2e5d0a05b93b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44410e71eaac671355ed69304ece35dbeb1f00b51cbae82a928f2e5d0a05b93b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44410e71eaac671355ed69304ece35dbeb1f00b51cbae82a928f2e5d0a05b93b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:51 np0005540825 podman[273658]: 2025-12-01 10:22:51.522088237 +0000 UTC m=+0.162005382 container init 121028e40b1346a9587261a28a4ed77468e6d4f0969af916afeaa8615dbeac7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:22:51 np0005540825 podman[273658]: 2025-12-01 10:22:51.533259679 +0000 UTC m=+0.173176834 container start 121028e40b1346a9587261a28a4ed77468e6d4f0969af916afeaa8615dbeac7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  1 05:22:51 np0005540825 podman[273658]: 2025-12-01 10:22:51.538600258 +0000 UTC m=+0.178517413 container attach 121028e40b1346a9587261a28a4ed77468e6d4f0969af916afeaa8615dbeac7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:22:51 np0005540825 nova_compute[256151]: 2025-12-01 10:22:51.803 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:51 np0005540825 nova_compute[256151]: 2025-12-01 10:22:51.946 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "522533d9-3ad6-4908-822e-02ea690da2e7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:22:51 np0005540825 nova_compute[256151]: 2025-12-01 10:22:51.947 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
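
The Acquiring/acquired pair above is oslo.concurrency's standard lock tracing: the lock is named after the instance UUID, and the "waited 0.001s" suffix makes contention measurable straight from DEBUG logs (a matching "released ... held" line appears when the section exits). A minimal sketch of the idiom, assuming oslo.concurrency is installed; the function name is illustrative:

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    # Same pattern nova uses: a named in-process lock serializing all
    # work on one instance UUID.
    @lockutils.synchronized("522533d9-3ad6-4908-822e-02ea690da2e7")
    def locked_build():
        pass  # critical section: only one thread builds this instance

    locked_build()
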
Dec  1 05:22:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 KiB/s wr, 62 op/s
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.001 256155 DEBUG nova.compute.manager [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.109 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.137 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.137 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.145 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.145 256155 INFO nova.compute.claims [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 05:22:52 np0005540825 lvm[273751]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:22:52 np0005540825 lvm[273751]: VG ceph_vg0 finished
Dec  1 05:22:52 np0005540825 gracious_hugle[273674]: {}
Dec  1 05:22:52 np0005540825 systemd[1]: libpod-121028e40b1346a9587261a28a4ed77468e6d4f0969af916afeaa8615dbeac7c.scope: Deactivated successfully.
Dec  1 05:22:52 np0005540825 systemd[1]: libpod-121028e40b1346a9587261a28a4ed77468e6d4f0969af916afeaa8615dbeac7c.scope: Consumed 1.065s CPU time.
Dec  1 05:22:52 np0005540825 podman[273658]: 2025-12-01 10:22:52.222105718 +0000 UTC m=+0.862022833 container died 121028e40b1346a9587261a28a4ed77468e6d4f0969af916afeaa8615dbeac7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:22:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay-44410e71eaac671355ed69304ece35dbeb1f00b51cbae82a928f2e5d0a05b93b-merged.mount: Deactivated successfully.
Dec  1 05:22:52 np0005540825 podman[273658]: 2025-12-01 10:22:52.261367831 +0000 UTC m=+0.901284956 container remove 121028e40b1346a9587261a28a4ed77468e6d4f0969af916afeaa8615dbeac7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hugle, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 05:22:52 np0005540825 systemd[1]: libpod-conmon-121028e40b1346a9587261a28a4ed77468e6d4f0969af916afeaa8615dbeac7c.scope: Deactivated successfully.
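
The kernel's "supports timestamps until 2038 (0x7fffffff)" lines during the gracious_hugle mount flurry above are informational: the XFS filesystem backing those overlay bind mounts stores 32-bit inode timestamps, and 0x7fffffff seconds past the Unix epoch is the classic y2038 ceiling. A stdlib one-liner confirms the date:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 s -> 2038-01-19 03:14:07 UTC, the largest
    # value a signed 32-bit time_t can hold.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
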
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.279 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:22:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:22:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:52.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:22:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:52.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.415 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:22:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254081221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.768 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
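
nova's RBD image backend sizes the cluster by shelling out to ceph df (the 0.488 s subprocess round trip logged above, with the matching mon_command dispatch visible on the ceph-mon side). The same call can be reproduced outside nova; a sketch assuming the /etc/ceph/ceph.conf and client.openstack credentials from the log:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(out)["stats"]
    # The same totals the pgmap lines summarize, in bytes.
    print(stats["total_bytes"], stats["total_used_bytes"],
          stats["total_avail_bytes"])
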
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.774 256155 DEBUG nova.compute.provider_tree [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.803 256155 DEBUG nova.scheduler.client.report [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.838 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
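
The inventory nova reports to placement above is enough to recompute this host's schedulable capacity: placement admits allocations while used + requested <= (total - reserved) * allocation_ratio per resource class. Worked against the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
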
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.838 256155 DEBUG nova.compute.manager [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.900 256155 DEBUG nova.compute.manager [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.900 256155 DEBUG nova.network.neutron [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.943 256155 INFO nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 05:22:52 np0005540825 nova_compute[256151]: 2025-12-01 10:22:52.972 256155 DEBUG nova.compute.manager [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.124 256155 DEBUG nova.compute.manager [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.125 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.126 256155 INFO nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Creating image(s)#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.160 256155 DEBUG nova.storage.rbd_utils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 522533d9-3ad6-4908-822e-02ea690da2e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.198 256155 DEBUG nova.storage.rbd_utils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 522533d9-3ad6-4908-822e-02ea690da2e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.227 256155 DEBUG nova.storage.rbd_utils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 522533d9-3ad6-4908-822e-02ea690da2e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.231 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.259 256155 DEBUG nova.policy [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b56a238daf0445798410e51caada0ff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f6be4e572624210b91193c011607c08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.318 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.318 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.319 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.319 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:22:53 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:53 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.346 256155 DEBUG nova.storage.rbd_utils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 522533d9-3ad6-4908-822e-02ea690da2e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.350 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 522533d9-3ad6-4908-822e-02ea690da2e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:22:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:53.688Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:22:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:53.688Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.769 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 522533d9-3ad6-4908-822e-02ea690da2e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:22:53 np0005540825 nova_compute[256151]: 2025-12-01 10:22:53.875 256155 DEBUG nova.storage.rbd_utils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] resizing rbd image 522533d9-3ad6-4908-822e-02ea690da2e7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
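
The sequence above is the qcow2-to-RBD path of nova's image backend: inspect the cached base image with a resource-capped qemu-img info, rbd import it into the vms pool as <uuid>_disk, then resize the image up to the flavor's root disk (1073741824 bytes = 1 GiB, matching root_gb=1 on the m1.nano flavor seen later). A condensed sketch of the same three commands, assuming the paths and cephx credentials from the log; nova additionally wraps step 1 in oslo_concurrency.prlimit to cap address space and CPU time:

    import subprocess

    BASE = ("/var/lib/nova/instances/_base/"
            "caad95fa2cc8ed03bed2e9851744954b07ec7b34")
    DISK = "522533d9-3ad6-4908-822e-02ea690da2e7_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    def run(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    # 1. Inspect the cached base image (JSON output, shared read lock).
    run("qemu-img", "info", BASE, "--force-share", "--output=json")
    # 2. Import it as a format-2 RBD image in the vms pool.
    run("rbd", "import", "--pool", "vms", BASE, DISK,
        "--image-format=2", *CEPH)
    # 3. Grow it to the flavor's 1 GiB root disk.
    run("rbd", "resize", "--pool", "vms", "--size", "1G", DISK, *CEPH)
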
Dec  1 05:22:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec  1 05:22:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.068 256155 DEBUG nova.objects.instance [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'migration_context' on Instance uuid 522533d9-3ad6-4908-822e-02ea690da2e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.091 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.092 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Ensure instance console log exists: /var/lib/nova/instances/522533d9-3ad6-4908-822e-02ea690da2e7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.093 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.093 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.094 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:22:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:54.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:22:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:54.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:22:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:22:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.787 256155 DEBUG nova.network.neutron [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Successfully updated port: a18935ec-0bdc-41b0-9e52-6e3919b1ede3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.811 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "refresh_cache-522533d9-3ad6-4908-822e-02ea690da2e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.812 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquired lock "refresh_cache-522533d9-3ad6-4908-822e-02ea690da2e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.812 256155 DEBUG nova.network.neutron [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.874 256155 DEBUG nova.compute.manager [req-edb97eb1-86e3-4ed7-b65b-9f664edaacd1 req-2a434aff-7692-465b-ac88-f5eb8cace96d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Received event network-changed-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.874 256155 DEBUG nova.compute.manager [req-edb97eb1-86e3-4ed7-b65b-9f664edaacd1 req-2a434aff-7692-465b-ac88-f5eb8cace96d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Refreshing instance network info cache due to event network-changed-a18935ec-0bdc-41b0-9e52-6e3919b1ede3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.875 256155 DEBUG oslo_concurrency.lockutils [req-edb97eb1-86e3-4ed7-b65b-9f664edaacd1 req-2a434aff-7692-465b-ac88-f5eb8cace96d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-522533d9-3ad6-4908-822e-02ea690da2e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:22:54 np0005540825 nova_compute[256151]: 2025-12-01 10:22:54.977 256155 DEBUG nova.network.neutron [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:22:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.762 256155 DEBUG nova.network.neutron [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Updating instance_info_cache with network_info: [{"id": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "address": "fa:16:3e:4d:c6:8f", "network": {"id": "434ae97b-0a30-409f-b9ad-87922177cfc0", "bridge": "br-int", "label": "tempest-network-smoke--141806258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa18935ec-0b", "ovs_interfaceid": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.788 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Releasing lock "refresh_cache-522533d9-3ad6-4908-822e-02ea690da2e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.789 256155 DEBUG nova.compute.manager [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Instance network_info: |[{"id": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "address": "fa:16:3e:4d:c6:8f", "network": {"id": "434ae97b-0a30-409f-b9ad-87922177cfc0", "bridge": "br-int", "label": "tempest-network-smoke--141806258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa18935ec-0b", "ovs_interfaceid": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.790 256155 DEBUG oslo_concurrency.lockutils [req-edb97eb1-86e3-4ed7-b65b-9f664edaacd1 req-2a434aff-7692-465b-ac88-f5eb8cace96d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-522533d9-3ad6-4908-822e-02ea690da2e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.790 256155 DEBUG nova.network.neutron [req-edb97eb1-86e3-4ed7-b65b-9f664edaacd1 req-2a434aff-7692-465b-ac88-f5eb8cace96d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Refreshing network info cache for port a18935ec-0bdc-41b0-9e52-6e3919b1ede3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
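
The network_info blob cached above is plain JSON once the log framing is stripped; the fields nova feeds into the guest definition next (MAC, tap device, fixed and floating IPs) can be read out directly. A small sketch, with the literal abridged to just the fields used:

    network_info = [{
        "id": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3",
        "address": "fa:16:3e:4d:c6:8f",
        "devname": "tapa18935ec-0b",
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.7",
                     "floating_ips": [{"address": "192.168.122.229"}]}],
        }]},
    }]

    vif = network_info[0]
    ip = vif["network"]["subnets"][0]["ips"][0]
    print(vif["address"], vif["devname"],
          ip["address"], ip["floating_ips"][0]["address"])
    # fa:16:3e:4d:c6:8f tapa18935ec-0b 10.100.0.7 192.168.122.229
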
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.795 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Start _get_guest_xml network_info=[{"id": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "address": "fa:16:3e:4d:c6:8f", "network": {"id": "434ae97b-0a30-409f-b9ad-87922177cfc0", "bridge": "br-int", "label": "tempest-network-smoke--141806258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa18935ec-0b", "ovs_interfaceid": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '8f75d6de-6ce0-44e1-b417-d0111424475b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.801 256155 WARNING nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.807 256155 DEBUG nova.virt.libvirt.host [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.808 256155 DEBUG nova.virt.libvirt.host [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.820 256155 DEBUG nova.virt.libvirt.host [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.821 256155 DEBUG nova.virt.libvirt.host [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.822 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.822 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T10:14:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e731827-1896-49cd-b0cc-12903555d217',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.823 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.823 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.824 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.824 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.825 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.825 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.826 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.826 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.827 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.827 256155 DEBUG nova.virt.hardware [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
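
The topology walk above (limits 65536:65536:65536, preferences 0:0:0, exactly one candidate) is nova enumerating sockets x cores x threads factorizations of the flavor's vcpu count; for vcpus=1 the only product is 1*1*1. A toy version of that enumeration — not nova's actual ordering or NUMA handling — under those assumptions:

    def possible_topologies(vcpus: int, max_each: int = 65536):
        """Yield (sockets, cores, threads) with sockets*cores*threads == vcpus."""
        for s in range(1, min(vcpus, max_each) + 1):
            if vcpus % s:
                continue
            rest = vcpus // s
            for c in range(1, min(rest, max_each) + 1):
                if rest % c == 0 and rest // c <= max_each:
                    yield (s, c, rest // c)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -> the logged choice
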
Dec  1 05:22:55 np0005540825 nova_compute[256151]: 2025-12-01 10:22:55.832 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:22:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:22:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:22:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2132823220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.318 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:22:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:56.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.354 256155 DEBUG nova.storage.rbd_utils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 522533d9-3ad6-4908-822e-02ea690da2e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.359 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:22:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:56.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:22:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/212026365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.805 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.820 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.821 256155 DEBUG nova.virt.libvirt.vif [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:22:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1780817085',display_name='tempest-TestNetworkBasicOps-server-1780817085',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1780817085',id=9,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ6lPUI8If+uE+iAc612r4JOcMLnuJrw8oBkAAei/bkeR4nVGmg5MtOYwFJLqAn6WKdrlL2XmoCCf4uq4d2Cv46om6PiVKynU0P4wjmRTqOBCuS1G0fqaYIKFjbAzVM13A==',key_name='tempest-TestNetworkBasicOps-293168541',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-n2xqk17z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:22:53Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=522533d9-3ad6-4908-822e-02ea690da2e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "address": "fa:16:3e:4d:c6:8f", "network": {"id": "434ae97b-0a30-409f-b9ad-87922177cfc0", "bridge": "br-int", "label": "tempest-network-smoke--141806258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa18935ec-0b", "ovs_interfaceid": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": 
"normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.821 256155 DEBUG nova.network.os_vif_util [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "address": "fa:16:3e:4d:c6:8f", "network": {"id": "434ae97b-0a30-409f-b9ad-87922177cfc0", "bridge": "br-int", "label": "tempest-network-smoke--141806258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa18935ec-0b", "ovs_interfaceid": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.822 256155 DEBUG nova.network.os_vif_util [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:c6:8f,bridge_name='br-int',has_traffic_filtering=True,id=a18935ec-0bdc-41b0-9e52-6e3919b1ede3,network=Network(434ae97b-0a30-409f-b9ad-87922177cfc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa18935ec-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
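The conversion above yields the os-vif VIFOpenVSwitch object that os_vif.plug() consumes a few lines later. A hedged sketch of driving the library directly; field names mirror the "Converted object" line, but treat the exact constructors as illustrative:

    # Hedged sketch of an os-vif plug, reusing the IDs from this spawn.
    import os_vif
    from os_vif import objects

    os_vif.initialize()  # load the ovs/linux-bridge/... plugins

    vif = objects.vif.VIFOpenVSwitch(
        id='a18935ec-0bdc-41b0-9e52-6e3919b1ede3',
        address='fa:16:3e:4d:c6:8f',
        bridge_name='br-int',
        vif_name='tapa18935ec-0b',
        port_profile=objects.vif.VIFPortProfileOpenVSwitch(
            interface_id='a18935ec-0bdc-41b0-9e52-6e3919b1ede3'),
        network=objects.network.Network(id='434ae97b-0a30-409f-b9ad-87922177cfc0'))
    instance = objects.instance_info.InstanceInfo(
        uuid='522533d9-3ad6-4908-822e-02ea690da2e7',
        name='instance-00000009')

    os_vif.plug(vif, instance)  # on success: "Successfully plugged vif ..."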
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.823 256155 DEBUG nova.objects.instance [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'pci_devices' on Instance uuid 522533d9-3ad6-4908-822e-02ea690da2e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.837 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] End _get_guest_xml xml=<domain type="kvm">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  <uuid>522533d9-3ad6-4908-822e-02ea690da2e7</uuid>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  <name>instance-00000009</name>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  <memory>131072</memory>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  <vcpu>1</vcpu>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  <metadata>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <nova:name>tempest-TestNetworkBasicOps-server-1780817085</nova:name>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <nova:creationTime>2025-12-01 10:22:55</nova:creationTime>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <nova:flavor name="m1.nano">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <nova:memory>128</nova:memory>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <nova:disk>1</nova:disk>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <nova:swap>0</nova:swap>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <nova:vcpus>1</nova:vcpus>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      </nova:flavor>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <nova:owner>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <nova:user uuid="5b56a238daf0445798410e51caada0ff">tempest-TestNetworkBasicOps-1248115384-project-member</nova:user>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <nova:project uuid="9f6be4e572624210b91193c011607c08">tempest-TestNetworkBasicOps-1248115384</nova:project>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      </nova:owner>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <nova:root type="image" uuid="8f75d6de-6ce0-44e1-b417-d0111424475b"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <nova:ports>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <nova:port uuid="a18935ec-0bdc-41b0-9e52-6e3919b1ede3">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        </nova:port>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      </nova:ports>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    </nova:instance>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  </metadata>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  <sysinfo type="smbios">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <system>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <entry name="manufacturer">RDO</entry>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <entry name="product">OpenStack Compute</entry>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <entry name="serial">522533d9-3ad6-4908-822e-02ea690da2e7</entry>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <entry name="uuid">522533d9-3ad6-4908-822e-02ea690da2e7</entry>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <entry name="family">Virtual Machine</entry>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    </system>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  </sysinfo>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  <os>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <boot dev="hd"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <smbios mode="sysinfo"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  </os>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  <features>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <acpi/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <apic/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <vmcoreinfo/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  </features>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  <clock offset="utc">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <timer name="hpet" present="no"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  </clock>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  <cpu mode="host-model" match="exact">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  </cpu>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  <devices>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <disk type="network" device="disk">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/522533d9-3ad6-4908-822e-02ea690da2e7_disk">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <target dev="vda" bus="virtio"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <disk type="network" device="cdrom">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/522533d9-3ad6-4908-822e-02ea690da2e7_disk.config">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <target dev="sda" bus="sata"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <interface type="ethernet">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <mac address="fa:16:3e:4d:c6:8f"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <mtu size="1442"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <target dev="tapa18935ec-0b"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    </interface>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <serial type="pty">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <log file="/var/lib/nova/instances/522533d9-3ad6-4908-822e-02ea690da2e7/console.log" append="off"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    </serial>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <video>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    </video>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <input type="tablet" bus="usb"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <rng model="virtio">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <backend model="random">/dev/urandom</backend>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    </rng>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <controller type="usb" index="0"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    <memballoon model="virtio">
Dec  1 05:22:56 np0005540825 nova_compute[256151]:      <stats period="10"/>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:    </memballoon>
Dec  1 05:22:56 np0005540825 nova_compute[256151]:  </devices>
Dec  1 05:22:56 np0005540825 nova_compute[256151]: </domain>
Dec  1 05:22:56 np0005540825 nova_compute[256151]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
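With _get_guest_xml done, the driver hands this XML to libvirt, which is what eventually produces the "Started Virtual Machine qemu-4-instance-00000009" unit below. A minimal libvirt-python sketch, assuming the XML above was saved to a local file:

    # Minimal sketch: boot the generated domain XML via libvirt-python.
    import libvirt

    conn = libvirt.open('qemu:///system')
    with open('instance-00000009.xml') as f:
        xml = f.read()

    dom = conn.createXML(xml, 0)  # define + start a transient domain in one step
    print(dom.name(), dom.ID())
    conn.close()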
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.839 256155 DEBUG nova.compute.manager [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Preparing to wait for external event network-vif-plugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.839 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.839 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.839 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
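The acquire/release pair above is the stock oslo.concurrency named-lock pattern: callers serialize on a string key ("<uuid>-events" here) rather than passing a shared Lock object around. In isolation:

    # Sketch of the named-lock pattern in the three lockutils lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('522533d9-3ad6-4908-822e-02ea690da2e7-events')
    def create_or_get_event():
        # runs with the named lock held; concurrent callers using the same
        # key block here, which is what the acquired/released lines trace
        pass

    # equivalent context-manager form
    with lockutils.lock('522533d9-3ad6-4908-822e-02ea690da2e7-events'):
        pass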
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.840 256155 DEBUG nova.virt.libvirt.vif [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:22:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1780817085',display_name='tempest-TestNetworkBasicOps-server-1780817085',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1780817085',id=9,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ6lPUI8If+uE+iAc612r4JOcMLnuJrw8oBkAAei/bkeR4nVGmg5MtOYwFJLqAn6WKdrlL2XmoCCf4uq4d2Cv46om6PiVKynU0P4wjmRTqOBCuS1G0fqaYIKFjbAzVM13A==',key_name='tempest-TestNetworkBasicOps-293168541',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-n2xqk17z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:22:53Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=522533d9-3ad6-4908-822e-02ea690da2e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "address": "fa:16:3e:4d:c6:8f", "network": {"id": "434ae97b-0a30-409f-b9ad-87922177cfc0", "bridge": "br-int", "label": "tempest-network-smoke--141806258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa18935ec-0b", "ovs_interfaceid": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "qbh_params": null, "qbg_params": null, "active": false, 
"vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.840 256155 DEBUG nova.network.os_vif_util [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "address": "fa:16:3e:4d:c6:8f", "network": {"id": "434ae97b-0a30-409f-b9ad-87922177cfc0", "bridge": "br-int", "label": "tempest-network-smoke--141806258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa18935ec-0b", "ovs_interfaceid": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.841 256155 DEBUG nova.network.os_vif_util [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:c6:8f,bridge_name='br-int',has_traffic_filtering=True,id=a18935ec-0bdc-41b0-9e52-6e3919b1ede3,network=Network(434ae97b-0a30-409f-b9ad-87922177cfc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa18935ec-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.841 256155 DEBUG os_vif [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:c6:8f,bridge_name='br-int',has_traffic_filtering=True,id=a18935ec-0bdc-41b0-9e52-6e3919b1ede3,network=Network(434ae97b-0a30-409f-b9ad-87922177cfc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa18935ec-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.842 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.842 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.843 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.846 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.846 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa18935ec-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.847 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa18935ec-0b, col_values=(('external_ids', {'iface-id': 'a18935ec-0bdc-41b0-9e52-6e3919b1ede3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:c6:8f', 'vm-uuid': '522533d9-3ad6-4908-822e-02ea690da2e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.848 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:56 np0005540825 NetworkManager[48963]: <info>  [1764584576.8495] manager: (tapa18935ec-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.851 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.855 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.857 256155 INFO os_vif [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:c6:8f,bridge_name='br-int',has_traffic_filtering=True,id=a18935ec-0bdc-41b0-9e52-6e3919b1ede3,network=Network(434ae97b-0a30-409f-b9ad-87922177cfc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa18935ec-0b')#033[00m
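The plug itself was the ovsdbapp transaction logged at 10:22:56.846-848, which bundled two commands: add the tap port to br-int, then db-set external_ids on the new Interface row. Roughly the same calls issued by hand, as a sketch (the OVSDB endpoint path is an assumption):

    # Hedged sketch mirroring AddPortCommand + DbSetCommand from the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_port('br-int', 'tapa18935ec-0b', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tapa18935ec-0b',
            ('external_ids', {
                'iface-id': 'a18935ec-0bdc-41b0-9e52-6e3919b1ede3',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:4d:c6:8f',
                'vm-uuid': '522533d9-3ad6-4908-822e-02ea690da2e7'})))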
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.917 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.917 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.917 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No VIF found with MAC fa:16:3e:4d:c6:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.918 256155 INFO nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Using config drive#033[00m
Dec  1 05:22:56 np0005540825 nova_compute[256151]: 2025-12-01 10:22:56.945 256155 DEBUG nova.storage.rbd_utils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 522533d9-3ad6-4908-822e-02ea690da2e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:22:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:22:57.246Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:22:57 np0005540825 nova_compute[256151]: 2025-12-01 10:22:57.254 256155 INFO nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Creating config drive at /var/lib/nova/instances/522533d9-3ad6-4908-822e-02ea690da2e7/disk.config#033[00m
Dec  1 05:22:57 np0005540825 nova_compute[256151]: 2025-12-01 10:22:57.258 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/522533d9-3ad6-4908-822e-02ea690da2e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbfd929_u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:22:57 np0005540825 nova_compute[256151]: 2025-12-01 10:22:57.307 256155 DEBUG nova.network.neutron [req-edb97eb1-86e3-4ed7-b65b-9f664edaacd1 req-2a434aff-7692-465b-ac88-f5eb8cace96d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Updated VIF entry in instance network info cache for port a18935ec-0bdc-41b0-9e52-6e3919b1ede3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 05:22:57 np0005540825 nova_compute[256151]: 2025-12-01 10:22:57.308 256155 DEBUG nova.network.neutron [req-edb97eb1-86e3-4ed7-b65b-9f664edaacd1 req-2a434aff-7692-465b-ac88-f5eb8cace96d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Updating instance_info_cache with network_info: [{"id": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "address": "fa:16:3e:4d:c6:8f", "network": {"id": "434ae97b-0a30-409f-b9ad-87922177cfc0", "bridge": "br-int", "label": "tempest-network-smoke--141806258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa18935ec-0b", "ovs_interfaceid": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:22:57 np0005540825 nova_compute[256151]: 2025-12-01 10:22:57.323 256155 DEBUG oslo_concurrency.lockutils [req-edb97eb1-86e3-4ed7-b65b-9f664edaacd1 req-2a434aff-7692-465b-ac88-f5eb8cace96d dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-522533d9-3ad6-4908-822e-02ea690da2e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:22:57 np0005540825 nova_compute[256151]: 2025-12-01 10:22:57.382 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/522533d9-3ad6-4908-822e-02ea690da2e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbfd929_u" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:22:57 np0005540825 nova_compute[256151]: 2025-12-01 10:22:57.410 256155 DEBUG nova.storage.rbd_utils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image 522533d9-3ad6-4908-822e-02ea690da2e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:22:57 np0005540825 nova_compute[256151]: 2025-12-01 10:22:57.413 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/522533d9-3ad6-4908-822e-02ea690da2e7/disk.config 522533d9-3ad6-4908-822e-02ea690da2e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:22:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.304 256155 DEBUG oslo_concurrency.processutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/522533d9-3ad6-4908-822e-02ea690da2e7/disk.config 522533d9-3ad6-4908-822e-02ea690da2e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.891s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.305 256155 INFO nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Deleting local config drive /var/lib/nova/instances/522533d9-3ad6-4908-822e-02ea690da2e7/disk.config because it was imported into RBD.#033[00m
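The config-drive sequence just completed reads: build an ISO9660 image with mkisofs, rbd-import it into the vms pool as <uuid>_disk.config (the image the earlier "does not exist" probes were checking for), then delete the local copy. The two commands from the log, condensed into a sketch (staging path illustrative):

    # Sketch of the config-drive flow: mkisofs -> rbd import -> unlink.
    import os
    import subprocess

    uuid = '522533d9-3ad6-4908-822e-02ea690da2e7'
    iso = f'/var/lib/nova/instances/{uuid}/disk.config'
    staging = '/tmp/tmpbfd929_u'  # metadata tree (openstack/..., ec2/...)

    subprocess.check_call(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2', staging])

    subprocess.check_call(
        ['rbd', 'import', '--pool', 'vms', iso, f'{uuid}_disk.config',
         '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])

    os.unlink(iso)  # "Deleting local config drive ... imported into RBD"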
Dec  1 05:22:58 np0005540825 systemd[1]: Starting libvirt secret daemon...
Dec  1 05:22:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:22:58.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:58 np0005540825 systemd[1]: Started libvirt secret daemon.
Dec  1 05:22:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:22:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:22:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:22:58.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:22:58 np0005540825 kernel: tapa18935ec-0b: entered promiscuous mode
Dec  1 05:22:58 np0005540825 NetworkManager[48963]: <info>  [1764584578.4090] manager: (tapa18935ec-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Dec  1 05:22:58 np0005540825 ovn_controller[153404]: 2025-12-01T10:22:58Z|00060|binding|INFO|Claiming lport a18935ec-0bdc-41b0-9e52-6e3919b1ede3 for this chassis.
Dec  1 05:22:58 np0005540825 ovn_controller[153404]: 2025-12-01T10:22:58Z|00061|binding|INFO|a18935ec-0bdc-41b0-9e52-6e3919b1ede3: Claiming fa:16:3e:4d:c6:8f 10.100.0.7
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.409 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.417 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.427 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 NetworkManager[48963]: <info>  [1764584578.4336] manager: (patch-provnet-da274a4a-a49c-4f01-b728-391696cd2672-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.433 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 NetworkManager[48963]: <info>  [1764584578.4347] manager: (patch-br-int-to-provnet-da274a4a-a49c-4f01-b728-391696cd2672): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.442 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:c6:8f 10.100.0.7'], port_security=['fa:16:3e:4d:c6:8f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1647305629', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '522533d9-3ad6-4908-822e-02ea690da2e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-434ae97b-0a30-409f-b9ad-87922177cfc0', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1647305629', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '7', 'neutron:security_group_ids': '11501149-732d-4202-ad97-ece49baad0dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b8707eb-b5e9-4720-9f51-1840140506cb, chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=a18935ec-0bdc-41b0-9e52-6e3919b1ede3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.444 163291 INFO neutron.agent.ovn.metadata.agent [-] Port a18935ec-0bdc-41b0-9e52-6e3919b1ede3 in datapath 434ae97b-0a30-409f-b9ad-87922177cfc0 bound to our chassis#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.446 163291 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 434ae97b-0a30-409f-b9ad-87922177cfc0#033[00m
Dec  1 05:22:58 np0005540825 systemd-udevd[274137]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.459 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[e97bf162-7bbb-4d4b-b6bc-00bf85b975ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.460 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap434ae97b-01 in ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
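"Provisioning metadata" means building an ovnmeta-<network> namespace plus a veth pair: tap434ae97b-00 stays in the root namespace (it is added to br-int by the AddPortCommand below), while tap434ae97b-01 moves inside. The agent does this through pyroute2 privsep calls; an equivalent hedged sketch using the ip CLI instead:

    # Hedged sketch of the namespace/veth wiring (the agent uses pyroute2,
    # not the ip CLI). Names come from the log; requires root.
    import subprocess

    ns = 'ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0'
    outer, inner = 'tap434ae97b-00', 'tap434ae97b-01'

    def ip(*args):
        subprocess.check_call(['ip', *args])

    ip('netns', 'add', ns)
    ip('link', 'add', outer, 'type', 'veth', 'peer', 'name', inner)
    ip('link', 'set', inner, 'netns', ns)      # inner end lives in the namespace
    ip('link', 'set', outer, 'up')
    ip('-n', ns, 'link', 'set', inner, 'up')
    # the outer end is then plugged into br-int so OVN can steer metadata
    # traffic for this datapath into the namespace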
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.461 262668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap434ae97b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.461 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd3ab9a-56af-4322-82ff-c6b65bb7b8e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.462 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[aee07dfc-211c-452d-a50a-c5f02cc2aa80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 systemd-machined[216307]: New machine qemu-4-instance-00000009.
Dec  1 05:22:58 np0005540825 NetworkManager[48963]: <info>  [1764584578.4696] device (tapa18935ec-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 05:22:58 np0005540825 NetworkManager[48963]: <info>  [1764584578.4711] device (tapa18935ec-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.474 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[669f9a8d-4123-4616-b99e-2284e6bf2718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.499 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[315828df-62f1-452e-9f06-e069002bc9f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.511 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 systemd[1]: Started Virtual Machine qemu-4-instance-00000009.
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.513 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.520 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.523 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[c04a5209-7a34-424f-b104-de4890662992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.530 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[17337f5f-2a3b-4f6f-8007-26314deb112e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 NetworkManager[48963]: <info>  [1764584578.5319] manager: (tap434ae97b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Dec  1 05:22:58 np0005540825 ovn_controller[153404]: 2025-12-01T10:22:58Z|00062|binding|INFO|Setting lport a18935ec-0bdc-41b0-9e52-6e3919b1ede3 ovn-installed in OVS
Dec  1 05:22:58 np0005540825 ovn_controller[153404]: 2025-12-01T10:22:58Z|00063|binding|INFO|Setting lport a18935ec-0bdc-41b0-9e52-6e3919b1ede3 up in Southbound
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.533 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 systemd-udevd[274142]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.563 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[e50a0dcc-286c-448e-aa12-a0de34952958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.565 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[b8609db5-0832-4b86-a8ac-1c8e50c3ca4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 NetworkManager[48963]: <info>  [1764584578.5816] device (tap434ae97b-00): carrier: link connected
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.585 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[ea78ae38-372b-47b7-8b23-f87efdb184fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.598 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[24f34bf8-cd2b-4d92-8c70-b9af3d2bbbdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap434ae97b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:51:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444721, 'reachable_time': 36164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274171, 'error': None, 'target': 'ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.608 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[5719a93e-5f7b-407a-8c5a-e423f986154e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:51ff'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444721, 'tstamp': 444721}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274172, 'error': None, 'target': 'ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.622 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[cce700d4-7817-46c2-9c49-2a081810d564]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap434ae97b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:51:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444721, 'reachable_time': 36164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274173, 'error': None, 'target': 'ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
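
The two privsep replies above are pyroute2 netlink messages (RTM_NEWADDR and RTM_NEWLINK) fetched from inside the ovnmeta- namespace: the agent is confirming that the in-namespace veth end tap434ae97b-01 is up with the expected MAC and link-local address. A minimal sketch of reading the same attributes directly, assuming pyroute2 is installed and the namespace from the log still exists on this host:

    # Sketch only; the agent itself does this through the privsep daemon.
    from pyroute2 import NetNS

    ns_name = 'ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0'
    with NetNS(ns_name) as ns:
        # Resolve the interface index, then fetch the same RTM_NEWLINK
        # attributes that appear in the privsep reply above.
        idx = ns.link_lookup(ifname='tap434ae97b-01')[0]
        link = ns.get_links(idx)[0]
        print(link.get_attr('IFLA_ADDRESS'))     # fa:16:3e:1f:51:ff
        print(link.get_attr('IFLA_OPERSTATE'))   # UP
        for addr in ns.get_addr(index=idx):
            print(addr.get_attr('IFA_ADDRESS'))  # fe80::f816:3eff:fe1f:51ff
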
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.648 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[453cf7eb-8731-4401-954e-482728eee026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.696 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5dcdf0-8be9-4d09-b051-10aaa2122322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.697 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap434ae97b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.697 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.698 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap434ae97b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.699 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 NetworkManager[48963]: <info>  [1764584578.7000] manager: (tap434ae97b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec  1 05:22:58 np0005540825 kernel: tap434ae97b-00: entered promiscuous mode
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.701 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.701 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap434ae97b-00, col_values=(('external_ids', {'iface-id': '45d4e66b-9979-4881-8973-52e53617afe5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
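
The DelPortCommand, AddPortCommand and DbSetCommand entries above are ovsdbapp transaction commands: the agent removes tap434ae97b-00 from br-ex if present, plugs it into br-int, and stamps the Interface row with the Neutron port ID so ovn-controller can bind it. A minimal sketch of an equivalent batch, assuming ovsdbapp and a local ovsdb-server socket at unix:/run/openvswitch/db.sock (not the agent's actual connection wiring):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # The same three commands the log shows, batched into one txn.
        txn.add(api.del_port('tap434ae97b-00', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap434ae97b-00', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap434ae97b-00',
            ('external_ids',
             {'iface-id': '45d4e66b-9979-4881-8973-52e53617afe5'})))
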
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.702 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 ovn_controller[153404]: 2025-12-01T10:22:58Z|00064|binding|INFO|Releasing lport 45d4e66b-9979-4881-8973-52e53617afe5 from this chassis (sb_readonly=0)
Dec  1 05:22:58 np0005540825 nova_compute[256151]: 2025-12-01 10:22:58.716 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.717 163291 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/434ae97b-0a30-409f-b9ad-87922177cfc0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/434ae97b-0a30-409f-b9ad-87922177cfc0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.718 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[12584241-39de-4b49-8c65-032857cdc7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.718 163291 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: global
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    log         /dev/log local0 debug
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    log-tag     haproxy-metadata-proxy-434ae97b-0a30-409f-b9ad-87922177cfc0
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    user        root
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    group       root
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    maxconn     1024
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    pidfile     /var/lib/neutron/external/pids/434ae97b-0a30-409f-b9ad-87922177cfc0.pid.haproxy
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    daemon
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: defaults
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    log global
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    mode http
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    option httplog
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    option dontlognull
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    option http-server-close
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    option forwardfor
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    retries                 3
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    timeout http-request    30s
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    timeout connect         30s
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    timeout client          32s
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    timeout server          32s
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    timeout http-keep-alive 30s
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: listen listener
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    bind 169.254.169.254:80
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]:    http-request add-header X-OVN-Network-ID 434ae97b-0a30-409f-b9ad-87922177cfc0
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
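
The haproxy_cfg dump above is the whole contract of the per-network metadata proxy: bind 169.254.169.254:80 inside the namespace, forward every request to the agent's unix socket at /var/lib/neutron/metadata_proxy, and tag it with X-OVN-Network-ID so the metadata service can work out which network (and hence which instance) is asking. A toy re-rendering of the listener stanza with string.Template; neutron's real template lives in neutron.agent.ovn.metadata.driver:

    from string import Template

    LISTENER = Template(
        'listen listener\n'
        '    bind $bind_ip:80\n'
        '    server metadata $socket_path\n'
        '    http-request add-header X-OVN-Network-ID $network_id\n')

    print(LISTENER.substitute(
        bind_ip='169.254.169.254',
        socket_path='/var/lib/neutron/metadata_proxy',
        network_id='434ae97b-0a30-409f-b9ad-87922177cfc0'))
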
Dec  1 05:22:58 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:58.719 163291 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0', 'env', 'PROCESS_TAG=haproxy-434ae97b-0a30-409f-b9ad-87922177cfc0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/434ae97b-0a30-409f-b9ad-87922177cfc0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
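
The create_process line shows how that config is actually launched: sudo plus neutron-rootwrap wraps an ip netns exec so haproxy runs inside the ovnmeta- namespace, with a PROCESS_TAG environment variable the kill scripts use to find it later. A minimal sketch of the same launch without rootwrap, assuming root privileges:

    import subprocess

    ns = 'ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/434ae97b-0a30-409f-b9ad-87922177cfc0.conf'

    # haproxy daemonizes itself (the "daemon" keyword in the config),
    # so this returns once the master process has forked.
    subprocess.run(
        ['ip', 'netns', 'exec', ns,
         'env', 'PROCESS_TAG=haproxy-434ae97b-0a30-409f-b9ad-87922177cfc0',
         'haproxy', '-f', cfg],
        check=True)
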
Dec  1 05:22:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:22:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:22:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:22:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:22:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:22:59 np0005540825 nova_compute[256151]: 2025-12-01 10:22:59.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:22:59 np0005540825 podman[274205]: 2025-12-01 10:22:59.068864078 +0000 UTC m=+0.023986976 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 05:22:59 np0005540825 nova_compute[256151]: 2025-12-01 10:22:59.183 256155 DEBUG nova.compute.manager [req-0e866265-5ae2-4f78-b06d-8b6ce07d0506 req-49a39836-a3b2-4984-813e-b2461a2fcbf7 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Received event network-vif-plugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:22:59 np0005540825 nova_compute[256151]: 2025-12-01 10:22:59.184 256155 DEBUG oslo_concurrency.lockutils [req-0e866265-5ae2-4f78-b06d-8b6ce07d0506 req-49a39836-a3b2-4984-813e-b2461a2fcbf7 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:22:59 np0005540825 nova_compute[256151]: 2025-12-01 10:22:59.185 256155 DEBUG oslo_concurrency.lockutils [req-0e866265-5ae2-4f78-b06d-8b6ce07d0506 req-49a39836-a3b2-4984-813e-b2461a2fcbf7 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:22:59 np0005540825 nova_compute[256151]: 2025-12-01 10:22:59.185 256155 DEBUG oslo_concurrency.lockutils [req-0e866265-5ae2-4f78-b06d-8b6ce07d0506 req-49a39836-a3b2-4984-813e-b2461a2fcbf7 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:22:59 np0005540825 nova_compute[256151]: 2025-12-01 10:22:59.186 256155 DEBUG nova.compute.manager [req-0e866265-5ae2-4f78-b06d-8b6ce07d0506 req-49a39836-a3b2-4984-813e-b2461a2fcbf7 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Processing event network-vif-plugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
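
The Acquiring/acquired/released trio around _pop_event is oslo.concurrency's lock logging: nova serializes external event delivery per instance on a "<uuid>-events" lock, and the waited/held durations in the lines above come straight from that wrapper. A minimal sketch of the pattern (names illustrative, not nova's code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('522533d9-3ad6-4908-822e-02ea690da2e7-events')
    def _pop_event():
        # Runs with the per-instance event lock held; concurrent
        # network-vif-plugged deliveries for the same instance queue here.
        pass

    _pop_event()
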
Dec  1 05:22:59 np0005540825 podman[274205]: 2025-12-01 10:22:59.237945044 +0000 UTC m=+0.193067952 container create 36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:22:59 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:59.283 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
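
The Matched UPDATE line is ovsdbapp's row-event dispatch: the agent registered a handler for updates to the single SB_Global row and is woken when nb_cfg moves from 10 to 11. A minimal sketch of such an event class, assuming ovsdbapp; the agent's real handler also delays the chassis write, as logged a few lines below:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on any update to the SB_Global table, no conditions.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            print('nb_cfg moved to', row.nb_cfg)
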
Dec  1 05:22:59 np0005540825 nova_compute[256151]: 2025-12-01 10:22:59.283 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:22:59 np0005540825 systemd[1]: Started libpod-conmon-36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3.scope.
Dec  1 05:22:59 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:22:59 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b89b460dcb930564b11fa1a9307111ea81a43591ece66686dfc0bbf0fdd26ecf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 05:22:59 np0005540825 podman[274205]: 2025-12-01 10:22:59.392984354 +0000 UTC m=+0.348107272 container init 36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  1 05:22:59 np0005540825 podman[274205]: 2025-12-01 10:22:59.403843307 +0000 UTC m=+0.358966205 container start 36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 05:22:59 np0005540825 neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0[274220]: [NOTICE]   (274224) : New worker (274226) forked
Dec  1 05:22:59 np0005540825 neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0[274220]: [NOTICE]   (274224) : Loading success.
Dec  1 05:22:59 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:22:59.476 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 05:22:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
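
_reclaim_queued_deletes is one of the oslo.service periodic tasks behind the recurring "Running periodic task" lines; with CONF.reclaim_instance_interval <= 0 it returns immediately, exactly as logged. A minimal sketch of the machinery, with an illustrative spacing value:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # Mirrors the guard logged above: a non-positive interval
            # turns the task into a no-op.
            interval = 0  # stand-in for CONF.reclaim_instance_interval
            if interval <= 0:
                return

    mgr = Manager(CONF)
    mgr.run_periodic_tasks(context=None)
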
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.255728) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584580255806, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1509, "num_deletes": 502, "total_data_size": 2280338, "memory_usage": 2341808, "flush_reason": "Manual Compaction"}
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.288 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584580.2881992, 522533d9-3ad6-4908-822e-02ea690da2e7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.290 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] VM Started (Lifecycle Event)#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.296 256155 DEBUG nova.compute.manager [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.299 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.304 256155 INFO nova.virt.libvirt.driver [-] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Instance spawned successfully.#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.305 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.313 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.317 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.326 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.327 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.328 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.329 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.329 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.330 256155 DEBUG nova.virt.libvirt.driver [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:00.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584580351380, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 2094203, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28114, "largest_seqno": 29622, "table_properties": {"data_size": 2087844, "index_size": 3049, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17625, "raw_average_key_size": 19, "raw_value_size": 2072866, "raw_average_value_size": 2334, "num_data_blocks": 132, "num_entries": 888, "num_filter_entries": 888, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764584476, "oldest_key_time": 1764584476, "file_creation_time": 1764584580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 95705 microseconds, and 9389 cpu microseconds.
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.364 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.365 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584580.288373, 522533d9-3ad6-4908-822e-02ea690da2e7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.365 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] VM Paused (Lifecycle Event)#033[00m
Dec  1 05:23:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:00.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.351439) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 2094203 bytes OK
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.351464) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.388691) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.388755) EVENT_LOG_v1 {"time_micros": 1764584580388740, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.388792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2272789, prev total WAL file size 2272789, number of live WAL files 2.
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.392 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.393452) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(2045KB)], [62(16MB)]
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584580393528, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 18939875, "oldest_snapshot_seqno": -1}
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.397 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584580.2986767, 522533d9-3ad6-4908-822e-02ea690da2e7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.397 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] VM Resumed (Lifecycle Event)#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.405 256155 INFO nova.compute.manager [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Took 7.28 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.406 256155 DEBUG nova.compute.manager [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.428 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.432 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.463 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5814 keys, 12756428 bytes, temperature: kUnknown
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584580481753, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12756428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12719309, "index_size": 21457, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 150239, "raw_average_key_size": 25, "raw_value_size": 12616019, "raw_average_value_size": 2169, "num_data_blocks": 859, "num_entries": 5814, "num_filter_entries": 5814, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764584580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.482746) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12756428 bytes
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.484008) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.6 rd, 144.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 16.1 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(15.1) write-amplify(6.1) OK, records in: 6830, records dropped: 1016 output_compression: NoCompression
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.484039) EVENT_LOG_v1 {"time_micros": 1764584580484027, "job": 34, "event": "compaction_finished", "compaction_time_micros": 88240, "compaction_time_cpu_micros": 38959, "output_level": 6, "num_output_files": 1, "total_output_size": 12756428, "num_input_records": 6830, "num_output_records": 5814, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584580484665, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584580488541, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.390141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.488624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.488631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.488633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.488635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:23:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:23:00.488636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
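
Alongside the human-readable lines, the mon's RocksDB emits machine-readable EVENT_LOG_v1 records (flush_started, table_file_creation, compaction_finished, table_file_deletion, ...); everything after the EVENT_LOG_v1 marker is plain JSON. A minimal stdlib sketch for extracting them from a log file like this one:

    import json
    import re

    EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})')

    def rocksdb_events(path):
        # Yield each EVENT_LOG_v1 payload in the file as a dict.
        with open(path) as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if m:
                    yield json.loads(m.group(1))

    for ev in rocksdb_events('/var/log/messages'):
        if ev.get('event') == 'compaction_finished':
            print(ev['job'], ev['total_output_size'])
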
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.489 256155 INFO nova.compute.manager [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Took 8.39 seconds to build instance.#033[00m
Dec  1 05:23:00 np0005540825 nova_compute[256151]: 2025-12-01 10:23:00.511 256155 DEBUG oslo_concurrency.lockutils [None req-68b55ad0-3f73-460a-8cf4-f618de5b79b4 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.064 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.065 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.066 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.066 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.067 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.301 256155 DEBUG nova.compute.manager [req-a8a7037b-8972-49e0-84a2-2c58c08ef57e req-fbc9cef9-9813-49c2-bbf0-e0202e8ff608 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Received event network-vif-plugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.302 256155 DEBUG oslo_concurrency.lockutils [req-a8a7037b-8972-49e0-84a2-2c58c08ef57e req-fbc9cef9-9813-49c2-bbf0-e0202e8ff608 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.303 256155 DEBUG oslo_concurrency.lockutils [req-a8a7037b-8972-49e0-84a2-2c58c08ef57e req-fbc9cef9-9813-49c2-bbf0-e0202e8ff608 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.304 256155 DEBUG oslo_concurrency.lockutils [req-a8a7037b-8972-49e0-84a2-2c58c08ef57e req-fbc9cef9-9813-49c2-bbf0-e0202e8ff608 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.305 256155 DEBUG nova.compute.manager [req-a8a7037b-8972-49e0-84a2-2c58c08ef57e req-fbc9cef9-9813-49c2-bbf0-e0202e8ff608 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] No waiting events found dispatching network-vif-plugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.305 256155 WARNING nova.compute.manager [req-a8a7037b-8972-49e0-84a2-2c58c08ef57e req-fbc9cef9-9813-49c2-bbf0-e0202e8ff608 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Received unexpected event network-vif-plugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 for instance with vm_state active and task_state None.#033[00m
Dec  1 05:23:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:01] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:23:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:01] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:23:01 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:01.478 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:23:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:23:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3547558712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.559 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.626 256155 DEBUG nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.626 256155 DEBUG nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  1 05:23:01 np0005540825 podman[274302]: 2025-12-01 10:23:01.659058852 +0000 UTC m=+0.059102061 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.803 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.804 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4444MB free_disk=59.967525482177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.805 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.805 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.807 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.848 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.924 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Instance 522533d9-3ad6-4908-822e-02ea690da2e7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.925 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.925 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 05:23:01 np0005540825 nova_compute[256151]: 2025-12-01 10:23:01.965 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:23:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec  1 05:23:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:02.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:23:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:02.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:23:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:23:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2044304106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:23:02 np0005540825 nova_compute[256151]: 2025-12-01 10:23:02.432 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
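
Because the node's storage is Ceph-backed, the resource tracker shells out to ceph df (twice in this audit, roughly 0.5 s each) instead of statting a local filesystem; the mon audit lines above show the same command arriving as a mon_command. A minimal sketch of the same probe; the JSON keys assume the current ceph df schema:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)

    # Cluster-wide totals; per-pool figures live under df['pools'].
    stats = df['stats']
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)
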
Dec  1 05:23:02 np0005540825 nova_compute[256151]: 2025-12-01 10:23:02.438 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:23:02 np0005540825 nova_compute[256151]: 2025-12-01 10:23:02.467 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
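
The inventory payload above is what placement uses for admission: usable capacity per resource class is (total - reserved) * allocation_ratio. Applied to the logged numbers, this host can overcommit to 32 VCPUs but only 7168 MiB of RAM and 52.2 GB of disk:

    # Placement's capacity rule applied to the inventory logged above.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
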
Dec  1 05:23:02 np0005540825 nova_compute[256151]: 2025-12-01 10:23:02.496 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 05:23:02 np0005540825 nova_compute[256151]: 2025-12-01 10:23:02.497 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:03.689Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:23:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:03.690Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
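
Both alertmanager failures above are TCP-level problems (dial timeout, then context deadline exceeded) against the dashboard receivers on compute-1 and compute-2, not HTTP errors from a live endpoint. A quick connectivity probe built only from the host/port pairs in the log:

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        sock = socket.socket()
        sock.settimeout(3)
        try:
            sock.connect((host, 8443))
            print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)
        finally:
            sock.close()
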
Dec  1 05:23:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec  1 05:23:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:23:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:04.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:23:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:04.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:23:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:04.580 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:04.581 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:04.581 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:04 np0005540825 nova_compute[256151]: 2025-12-01 10:23:04.751 256155 DEBUG oslo_concurrency.lockutils [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "522533d9-3ad6-4908-822e-02ea690da2e7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:04 np0005540825 nova_compute[256151]: 2025-12-01 10:23:04.752 256155 DEBUG oslo_concurrency.lockutils [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:04 np0005540825 nova_compute[256151]: 2025-12-01 10:23:04.752 256155 DEBUG oslo_concurrency.lockutils [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:04 np0005540825 nova_compute[256151]: 2025-12-01 10:23:04.752 256155 DEBUG oslo_concurrency.lockutils [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:04 np0005540825 nova_compute[256151]: 2025-12-01 10:23:04.753 256155 DEBUG oslo_concurrency.lockutils [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:04 np0005540825 nova_compute[256151]: 2025-12-01 10:23:04.754 256155 INFO nova.compute.manager [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Terminating instance#033[00m
Dec  1 05:23:04 np0005540825 nova_compute[256151]: 2025-12-01 10:23:04.755 256155 DEBUG nova.compute.manager [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
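
The "Terminating instance" / "Start destroying" pair marks the compute manager acting on an API-initiated delete (tempest tearing down its TestNetworkBasicOps server). A sketch of the client-side call that sets this sequence in motion, using openstacksdk; the cloud name is hypothetical, the UUID is the one in the log:

    import openstack

    conn = openstack.connect(cloud="mycloud")  # hypothetical clouds.yaml entry
    conn.compute.delete_server("522533d9-3ad6-4908-822e-02ea690da2e7")
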
Dec  1 05:23:04 np0005540825 kernel: tapa18935ec-0b (unregistering): left promiscuous mode
Dec  1 05:23:04 np0005540825 NetworkManager[48963]: <info>  [1764584584.8657] device (tapa18935ec-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 05:23:04 np0005540825 nova_compute[256151]: 2025-12-01 10:23:04.882 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:04 np0005540825 nova_compute[256151]: 2025-12-01 10:23:04.884 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:04 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:04Z|00065|binding|INFO|Releasing lport a18935ec-0bdc-41b0-9e52-6e3919b1ede3 from this chassis (sb_readonly=0)
Dec  1 05:23:04 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:04Z|00066|binding|INFO|Setting lport a18935ec-0bdc-41b0-9e52-6e3919b1ede3 down in Southbound
Dec  1 05:23:04 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:04Z|00067|binding|INFO|Removing iface tapa18935ec-0b ovn-installed in OVS
Dec  1 05:23:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:04.890 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:c6:8f 10.100.0.7'], port_security=['fa:16:3e:4d:c6:8f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1647305629', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '522533d9-3ad6-4908-822e-02ea690da2e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-434ae97b-0a30-409f-b9ad-87922177cfc0', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1647305629', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '9', 'neutron:security_group_ids': '11501149-732d-4202-ad97-ece49baad0dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b8707eb-b5e9-4720-9f51-1840140506cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=a18935ec-0bdc-41b0-9e52-6e3919b1ede3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:23:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:04.892 163291 INFO neutron.agent.ovn.metadata.agent [-] Port a18935ec-0bdc-41b0-9e52-6e3919b1ede3 in datapath 434ae97b-0a30-409f-b9ad-87922177cfc0 unbound from our chassis#033[00m
Dec  1 05:23:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:04.893 163291 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 434ae97b-0a30-409f-b9ad-87922177cfc0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
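
The matched Port_Binding update is how the metadata agent learns the port left this chassis: up flipped from [True] to [False] and the chassis column emptied, so the agent concludes the ovnmeta namespace can be torn down. A stripped-down sketch of an ovsdbapp row event of that shape (assuming ovsdbapp is installed; the class name and body are illustrative, not neutron's actual handler):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self):
            # fire on 'update' changes to the Port_Binding table, matching
            # the events=('update',), table='Port_Binding' seen in the log
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # the real handler checks whether the port moved on or off
            # this chassis and (de)provisions metadata accordingly
            print("binding changed for", row.logical_port)
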
Dec  1 05:23:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:04.897 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6e2ccf-663a-4ec7-b537-f09278f4d2bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:04.898 163291 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0 namespace which is not needed anymore#033[00m
Dec  1 05:23:04 np0005540825 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec  1 05:23:04 np0005540825 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Consumed 6.238s CPU time.
Dec  1 05:23:04 np0005540825 nova_compute[256151]: 2025-12-01 10:23:04.933 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:04 np0005540825 systemd-machined[216307]: Machine qemu-4-instance-00000009 terminated.
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.007 256155 INFO nova.virt.libvirt.driver [-] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Instance destroyed successfully.#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.008 256155 DEBUG nova.objects.instance [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'resources' on Instance uuid 522533d9-3ad6-4908-822e-02ea690da2e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.023 256155 DEBUG nova.virt.libvirt.vif [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T10:22:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1780817085',display_name='tempest-TestNetworkBasicOps-server-1780817085',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1780817085',id=9,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ6lPUI8If+uE+iAc612r4JOcMLnuJrw8oBkAAei/bkeR4nVGmg5MtOYwFJLqAn6WKdrlL2XmoCCf4uq4d2Cv46om6PiVKynU0P4wjmRTqOBCuS1G0fqaYIKFjbAzVM13A==',key_name='tempest-TestNetworkBasicOps-293168541',keypairs=<?>,launch_index=0,launched_at=2025-12-01T10:23:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-n2xqk17z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T10:23:00Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=522533d9-3ad6-4908-822e-02ea690da2e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "address": "fa:16:3e:4d:c6:8f", "network": {"id": "434ae97b-0a30-409f-b9ad-87922177cfc0", "bridge": "br-int", "label": "tempest-network-smoke--141806258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa18935ec-0b", "ovs_interfaceid": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.024 256155 DEBUG nova.network.os_vif_util [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "address": "fa:16:3e:4d:c6:8f", "network": {"id": "434ae97b-0a30-409f-b9ad-87922177cfc0", "bridge": "br-int", "label": "tempest-network-smoke--141806258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa18935ec-0b", "ovs_interfaceid": "a18935ec-0bdc-41b0-9e52-6e3919b1ede3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.025 256155 DEBUG nova.network.os_vif_util [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:c6:8f,bridge_name='br-int',has_traffic_filtering=True,id=a18935ec-0bdc-41b0-9e52-6e3919b1ede3,network=Network(434ae97b-0a30-409f-b9ad-87922177cfc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa18935ec-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.026 256155 DEBUG os_vif [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:c6:8f,bridge_name='br-int',has_traffic_filtering=True,id=a18935ec-0bdc-41b0-9e52-6e3919b1ede3,network=Network(434ae97b-0a30-409f-b9ad-87922177cfc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa18935ec-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.029 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.030 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa18935ec-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.075 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.078 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.081 256155 INFO os_vif [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:c6:8f,bridge_name='br-int',has_traffic_filtering=True,id=a18935ec-0bdc-41b0-9e52-6e3919b1ede3,network=Network(434ae97b-0a30-409f-b9ad-87922177cfc0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa18935ec-0b')#033[00m
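
The successful unplug above was carried out by the DelPortCommand transaction logged at 10:23:05.030. The CLI equivalent of that OVSDB operation, sketched via subprocess (assumes ovs-vsctl is on PATH; bridge and port names are from the log):

    import subprocess

    subprocess.check_call(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapa18935ec-0b"])
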
Dec  1 05:23:05 np0005540825 neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0[274220]: [NOTICE]   (274224) : haproxy version is 2.8.14-c23fe91
Dec  1 05:23:05 np0005540825 neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0[274220]: [NOTICE]   (274224) : path to executable is /usr/sbin/haproxy
Dec  1 05:23:05 np0005540825 neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0[274220]: [WARNING]  (274224) : Exiting Master process...
Dec  1 05:23:05 np0005540825 neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0[274220]: [ALERT]    (274224) : Current worker (274226) exited with code 143 (Terminated)
Dec  1 05:23:05 np0005540825 neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0[274220]: [WARNING]  (274224) : All workers exited. Exiting... (0)
Dec  1 05:23:05 np0005540825 systemd[1]: libpod-36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3.scope: Deactivated successfully.
Dec  1 05:23:05 np0005540825 podman[274379]: 2025-12-01 10:23:05.110850147 +0000 UTC m=+0.060620871 container died 36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.112 256155 DEBUG nova.compute.manager [req-ecf09b1e-4504-4715-b650-fe97524f6bf7 req-4dcdda72-2a67-4a60-9f88-7041395c7ff6 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Received event network-vif-unplugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.113 256155 DEBUG oslo_concurrency.lockutils [req-ecf09b1e-4504-4715-b650-fe97524f6bf7 req-4dcdda72-2a67-4a60-9f88-7041395c7ff6 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.114 256155 DEBUG oslo_concurrency.lockutils [req-ecf09b1e-4504-4715-b650-fe97524f6bf7 req-4dcdda72-2a67-4a60-9f88-7041395c7ff6 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.114 256155 DEBUG oslo_concurrency.lockutils [req-ecf09b1e-4504-4715-b650-fe97524f6bf7 req-4dcdda72-2a67-4a60-9f88-7041395c7ff6 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.115 256155 DEBUG nova.compute.manager [req-ecf09b1e-4504-4715-b650-fe97524f6bf7 req-4dcdda72-2a67-4a60-9f88-7041395c7ff6 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] No waiting events found dispatching network-vif-unplugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.115 256155 DEBUG nova.compute.manager [req-ecf09b1e-4504-4715-b650-fe97524f6bf7 req-4dcdda72-2a67-4a60-9f88-7041395c7ff6 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Received event network-vif-unplugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 05:23:05 np0005540825 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3-userdata-shm.mount: Deactivated successfully.
Dec  1 05:23:05 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b89b460dcb930564b11fa1a9307111ea81a43591ece66686dfc0bbf0fdd26ecf-merged.mount: Deactivated successfully.
Dec  1 05:23:05 np0005540825 podman[274379]: 2025-12-01 10:23:05.156175698 +0000 UTC m=+0.105946382 container cleanup 36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 05:23:05 np0005540825 systemd[1]: libpod-conmon-36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3.scope: Deactivated successfully.
Dec  1 05:23:05 np0005540825 podman[274426]: 2025-12-01 10:23:05.258367731 +0000 UTC m=+0.069150403 container remove 36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:23:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:05.268 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3af86d0-659b-4311-8002-d1c5caa45c20]: (4, ('Mon Dec  1 10:23:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0 (36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3)\n36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3\nMon Dec  1 10:23:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0 (36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3)\n36d15a5d01e901c72a3efbb1e018db2eb64145857a41f70181e524af5956f9a3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
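
The privsep reply above carries the output of the agent's stop-then-delete of the per-network haproxy container. Roughly equivalent podman calls, with the container name taken from the log:

    import subprocess

    name = "neutron-haproxy-ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0"
    subprocess.check_call(["podman", "stop", name])
    subprocess.check_call(["podman", "rm", name])
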
Dec  1 05:23:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:05.270 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[3d443468-6768-4df7-9f01-e1f4e1b65335]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:05.271 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap434ae97b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.273 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:05 np0005540825 kernel: tap434ae97b-00: left promiscuous mode
Dec  1 05:23:05 np0005540825 nova_compute[256151]: 2025-12-01 10:23:05.303 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:05.306 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ff192c-35db-4975-bf93-ae0358f26f9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:05.325 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[dda35666-775c-4da8-9233-b6cee2fbfdd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:05.327 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[369f994a-aa8b-4a2e-88fc-0f0812241284]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:05.348 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[99535669-4bab-498f-a97b-69f236001a50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444715, 'reachable_time': 26146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274447, 'error': None, 'target': 'ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:05.352 163408 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
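
The RTM_NEWLINK dump just above shows the namespace held only its loopback device before deletion, which is why tearing it down is safe. The same inspection can be sketched with pyroute2, the library neutron's privileged ip_lib uses under the hood (this only works while the namespace still exists):

    from pyroute2 import NetNS

    with NetNS("ovnmeta-434ae97b-0a30-409f-b9ad-87922177cfc0") as ns:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"))  # -> lo
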
Dec  1 05:23:05 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:05.352 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[77c2b247-1777-4952-b50d-5d12be233bdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:05 np0005540825 systemd[1]: run-netns-ovnmeta\x2d434ae97b\x2d0a30\x2d409f\x2db9ad\x2d87922177cfc0.mount: Deactivated successfully.
Dec  1 05:23:05 np0005540825 podman[274439]: 2025-12-01 10:23:05.449206614 +0000 UTC m=+0.121157188 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 05:23:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Dec  1 05:23:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:06.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  1 05:23:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:06.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  1 05:23:06 np0005540825 nova_compute[256151]: 2025-12-01 10:23:06.498 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:23:06 np0005540825 nova_compute[256151]: 2025-12-01 10:23:06.809 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  1 05:23:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/503124220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  1 05:23:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  1 05:23:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/503124220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  1 05:23:07 np0005540825 nova_compute[256151]: 2025-12-01 10:23:07.194 256155 DEBUG nova.compute.manager [req-f86067cc-3aaa-46fc-9d80-293ddf8f6d96 req-cfa900e0-845d-44fd-8e6b-b61deb522ded dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Received event network-vif-plugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:23:07 np0005540825 nova_compute[256151]: 2025-12-01 10:23:07.194 256155 DEBUG oslo_concurrency.lockutils [req-f86067cc-3aaa-46fc-9d80-293ddf8f6d96 req-cfa900e0-845d-44fd-8e6b-b61deb522ded dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:07 np0005540825 nova_compute[256151]: 2025-12-01 10:23:07.194 256155 DEBUG oslo_concurrency.lockutils [req-f86067cc-3aaa-46fc-9d80-293ddf8f6d96 req-cfa900e0-845d-44fd-8e6b-b61deb522ded dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:07 np0005540825 nova_compute[256151]: 2025-12-01 10:23:07.194 256155 DEBUG oslo_concurrency.lockutils [req-f86067cc-3aaa-46fc-9d80-293ddf8f6d96 req-cfa900e0-845d-44fd-8e6b-b61deb522ded dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:07 np0005540825 nova_compute[256151]: 2025-12-01 10:23:07.195 256155 DEBUG nova.compute.manager [req-f86067cc-3aaa-46fc-9d80-293ddf8f6d96 req-cfa900e0-845d-44fd-8e6b-b61deb522ded dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] No waiting events found dispatching network-vif-plugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:23:07 np0005540825 nova_compute[256151]: 2025-12-01 10:23:07.195 256155 WARNING nova.compute.manager [req-f86067cc-3aaa-46fc-9d80-293ddf8f6d96 req-cfa900e0-845d-44fd-8e6b-b61deb522ded dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Received unexpected event network-vif-plugged-a18935ec-0bdc-41b0-9e52-6e3919b1ede3 for instance with vm_state active and task_state deleting.#033[00m
Dec  1 05:23:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:07.247Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:23:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:07.247Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:23:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 80 op/s
Dec  1 05:23:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:08.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:23:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:08.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:23:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:23:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=cleanup t=2025-12-01T10:23:09.217294342Z level=info msg="Completed cleanup jobs" duration=9.716303ms
Dec  1 05:23:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=plugins.update.checker t=2025-12-01T10:23:09.354489016Z level=info msg="Update check succeeded" duration=82.320115ms
Dec  1 05:23:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=grafana.update.checker t=2025-12-01T10:23:09.367789083Z level=info msg="Update check succeeded" duration=89.259516ms
Dec  1 05:23:09 np0005540825 nova_compute[256151]: 2025-12-01 10:23:09.504 256155 INFO nova.virt.libvirt.driver [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Deleting instance files /var/lib/nova/instances/522533d9-3ad6-4908-822e-02ea690da2e7_del#033[00m
Dec  1 05:23:09 np0005540825 nova_compute[256151]: 2025-12-01 10:23:09.505 256155 INFO nova.virt.libvirt.driver [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Deletion of /var/lib/nova/instances/522533d9-3ad6-4908-822e-02ea690da2e7_del complete#033[00m
Dec  1 05:23:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:23:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:23:09 np0005540825 nova_compute[256151]: 2025-12-01 10:23:09.556 256155 INFO nova.compute.manager [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Took 4.80 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 05:23:09 np0005540825 nova_compute[256151]: 2025-12-01 10:23:09.556 256155 DEBUG oslo.service.loopingcall [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 05:23:09 np0005540825 nova_compute[256151]: 2025-12-01 10:23:09.557 256155 DEBUG nova.compute.manager [-] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 05:23:09 np0005540825 nova_compute[256151]: 2025-12-01 10:23:09.557 256155 DEBUG nova.network.neutron [-] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
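
deallocate_for_instance() asks neutron for the ports bound to this instance and releases nova's claim on them; note that this particular port was created by tempest with preserve_on_delete=true, so it is unbound rather than deleted. A rough openstacksdk sketch of the lookup side (hypothetical cloud name; the device_id is the instance UUID from the log):

    import openstack

    conn = openstack.connect(cloud="mycloud")  # hypothetical
    for port in conn.network.ports(
            device_id="522533d9-3ad6-4908-822e-02ea690da2e7"):
        # nova deletes ports it created and merely unbinds preserved ones
        print(port.id, port.name)
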
Dec  1 05:23:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:23:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:23:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:23:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:23:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:23:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:23:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 80 op/s
Dec  1 05:23:10 np0005540825 nova_compute[256151]: 2025-12-01 10:23:10.120 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:10.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:23:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:10.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:23:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:10 np0005540825 nova_compute[256151]: 2025-12-01 10:23:10.876 256155 DEBUG nova.network.neutron [-] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:23:10 np0005540825 nova_compute[256151]: 2025-12-01 10:23:10.896 256155 INFO nova.compute.manager [-] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Took 1.34 seconds to deallocate network for instance.#033[00m
Dec  1 05:23:10 np0005540825 nova_compute[256151]: 2025-12-01 10:23:10.949 256155 DEBUG oslo_concurrency.lockutils [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:10 np0005540825 nova_compute[256151]: 2025-12-01 10:23:10.950 256155 DEBUG oslo_concurrency.lockutils [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:11 np0005540825 nova_compute[256151]: 2025-12-01 10:23:11.023 256155 DEBUG oslo_concurrency.processutils [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:23:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:11] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec  1 05:23:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:11] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec  1 05:23:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:23:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3760003608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:23:11 np0005540825 nova_compute[256151]: 2025-12-01 10:23:11.532 256155 DEBUG oslo_concurrency.processutils [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:23:11 np0005540825 nova_compute[256151]: 2025-12-01 10:23:11.542 256155 DEBUG nova.compute.provider_tree [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:23:11 np0005540825 nova_compute[256151]: 2025-12-01 10:23:11.557 256155 DEBUG nova.scheduler.client.report [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 05:23:11 np0005540825 nova_compute[256151]: 2025-12-01 10:23:11.597 256155 DEBUG oslo_concurrency.lockutils [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:11 np0005540825 nova_compute[256151]: 2025-12-01 10:23:11.647 256155 INFO nova.scheduler.client.report [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Deleted allocations for instance 522533d9-3ad6-4908-822e-02ea690da2e7#033[00m
Dec  1 05:23:11 np0005540825 nova_compute[256151]: 2025-12-01 10:23:11.750 256155 DEBUG oslo_concurrency.lockutils [None req-7b00aba3-ee68-4c07-9ff0-5c398af01300 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "522533d9-3ad6-4908-822e-02ea690da2e7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
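The paired `acquired ... :: waited` / `"released" ... :: held` lines are emitted by oslo.concurrency's lock wrapper, which nova uses to serialize all operations on one instance UUID. The pattern, reduced to a sketch (the function body is illustrative):

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("522533d9-3ad6-4908-822e-02ea690da2e7")
def do_terminate_instance():
    # Everything here runs under the per-instance lock; the 6.998s
    # "held" figure logged above is the wall time of this critical section.
    ...
```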
Dec  1 05:23:11 np0005540825 nova_compute[256151]: 2025-12-01 10:23:11.811 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec  1 05:23:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:23:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:12.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:23:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:23:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:12.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
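The anonymous `HEAD / HTTP/1.0` pairs arriving every two seconds from 192.168.122.100 and .102 have the shape of load-balancer health probes against radosgw's beast frontend. An equivalent manual probe (host and port here are assumptions; substitute the endpoint your LB is actually configured with):

```python
import http.client

conn = http.client.HTTPConnection("np0005540825", 8080, timeout=2)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)  # radosgw answers 200 with an empty body, as logged
```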
Dec  1 05:23:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:13.691Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:23:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:13.691Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
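Alertmanager is repeatedly failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2; note that 8443 conventionally carries HTTPS, so an `http://` receiver URL may itself be the misconfiguration. A hedged reachability test that posts a minimal Alertmanager-style payload to the same URL taken from the error above:

```python
import json
import urllib.request

payload = {"version": "4", "status": "firing", "alerts": []}
req = urllib.request.Request(
    "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Expect the same dial timeout seen above if nothing listens on 8443,
# or a TLS-looking handshake error if the endpoint is actually HTTPS.
urllib.request.urlopen(req, timeout=5)
```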
Dec  1 05:23:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Dec  1 05:23:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
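This ganesha block repeats every ~5 s: the server re-enters a 90-second grace period, finds zero reclaimable clients, and `rados_cluster_grace_enforcing` keeps returning -45, so grace is never lifted. With the rados_cluster recovery backend the shared grace state lives in a RADOS object and can be inspected with the `ganesha-rados-grace` tool; the pool and namespace below are assumptions inferred from this deployment (the `.nfs` pool appears in the balancer's pool list later in this log):

```python
import subprocess

# Dump the shared grace epoch and per-node need-grace/enforcing flags.
subprocess.run(
    ["ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs-2", "dump"],
    check=True,
)
```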
Dec  1 05:23:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:14.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:14.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:15 np0005540825 nova_compute[256151]: 2025-12-01 10:23:15.122 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Dec  1 05:23:16 np0005540825 podman[274523]: 2025-12-01 10:23:16.267080469 +0000 UTC m=+0.119169607 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
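The podman event above is a periodic healthcheck result for ovn_controller (`health_status=healthy`, failing streak 0), driven by the `/openstack/healthcheck` test mounted into the container. The same state can be read on demand (container name taken from the log):

```python
import json
import subprocess

out = subprocess.run(
    ["podman", "inspect", "ovn_controller",
     "--format", "{{json .State.Health}}"],
    check=True, capture_output=True, text=True,
).stdout
health = json.loads(out)
print(health["Status"], health["FailingStreak"])
```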
Dec  1 05:23:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:16.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:23:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:16.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:23:16 np0005540825 nova_compute[256151]: 2025-12-01 10:23:16.750 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:16 np0005540825 nova_compute[256151]: 2025-12-01 10:23:16.823 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:16 np0005540825 nova_compute[256151]: 2025-12-01 10:23:16.826 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:17.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:23:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Dec  1 05:23:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:18.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:18.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:23:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Dec  1 05:23:20 np0005540825 nova_compute[256151]: 2025-12-01 10:23:20.001 256155 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764584585.0008008, 522533d9-3ad6-4908-822e-02ea690da2e7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:23:20 np0005540825 nova_compute[256151]: 2025-12-01 10:23:20.002 256155 INFO nova.compute.manager [-] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] VM Stopped (Lifecycle Event)#033[00m
Dec  1 05:23:20 np0005540825 nova_compute[256151]: 2025-12-01 10:23:20.027 256155 DEBUG nova.compute.manager [None req-0d55379f-b591-41ce-88ae-f1ee214badc3 - - - - - -] [instance: 522533d9-3ad6-4908-822e-02ea690da2e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
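The "VM Stopped" lifecycle event and the follow-up `_get_power_state` check both come down to asking libvirt for the domain's state so nova can reconcile it with its database. An equivalent manual query via libvirt-python (UUID from the log; the domain may already be undefined here, since the instance was just terminated):

```python
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("522533d9-3ad6-4908-822e-02ea690da2e7")
state, _reason = dom.state()
print(state == libvirt.VIR_DOMAIN_SHUTOFF)  # True for a stopped domain
conn.close()
```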
Dec  1 05:23:20 np0005540825 nova_compute[256151]: 2025-12-01 10:23:20.167 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:20.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:20.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:21] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec  1 05:23:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:21] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec  1 05:23:21 np0005540825 nova_compute[256151]: 2025-12-01 10:23:21.826 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Dec  1 05:23:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:22.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:22.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:23.693Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:23:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:23:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:23:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:24.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:24.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:23:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:23:25 np0005540825 nova_compute[256151]: 2025-12-01 10:23:25.169 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:23:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:26.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:26.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:26 np0005540825 nova_compute[256151]: 2025-12-01 10:23:26.828 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:27.250Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:23:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:23:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:28.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:28.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:23:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:23:30 np0005540825 nova_compute[256151]: 2025-12-01 10:23:30.222 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:30.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:30.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:31] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:23:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:31] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:23:31 np0005540825 nova_compute[256151]: 2025-12-01 10:23:31.831 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:23:32 np0005540825 podman[274591]: 2025-12-01 10:23:32.213056161 +0000 UTC m=+0.071841731 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:23:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:32.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:32.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:33.694Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:23:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:23:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:23:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:34.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:35 np0005540825 nova_compute[256151]: 2025-12-01 10:23:35.273 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:35 np0005540825 nova_compute[256151]: 2025-12-01 10:23:35.917 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "a0d2df94-256c-4d12-b661-60feb351cd23" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:35 np0005540825 nova_compute[256151]: 2025-12-01 10:23:35.918 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:35 np0005540825 nova_compute[256151]: 2025-12-01 10:23:35.942 256155 DEBUG nova.compute.manager [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 05:23:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.018 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.019 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.027 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.028 256155 INFO nova.compute.claims [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.166 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:23:36 np0005540825 podman[274614]: 2025-12-01 10:23:36.231669045 +0000 UTC m=+0.085466658 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:23:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:36.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:36.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:23:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2076139831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.655 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.662 256155 DEBUG nova.compute.provider_tree [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.716 256155 DEBUG nova.scheduler.client.report [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.806 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.807 256155 DEBUG nova.compute.manager [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.832 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.879 256155 DEBUG nova.compute.manager [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.880 256155 DEBUG nova.network.neutron [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.901 256155 INFO nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 05:23:36 np0005540825 nova_compute[256151]: 2025-12-01 10:23:36.918 256155 DEBUG nova.compute.manager [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.012 256155 DEBUG nova.compute.manager [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.014 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.015 256155 INFO nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Creating image(s)#033[00m
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.057 256155 DEBUG nova.storage.rbd_utils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image a0d2df94-256c-4d12-b661-60feb351cd23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.092 256155 DEBUG nova.storage.rbd_utils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image a0d2df94-256c-4d12-b661-60feb351cd23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.128 256155 DEBUG nova.storage.rbd_utils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image a0d2df94-256c-4d12-b661-60feb351cd23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.132 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.159 256155 DEBUG nova.policy [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b56a238daf0445798410e51caada0ff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f6be4e572624210b91193c011607c08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
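The failed `network:attach_external_network` check is oslo.policy at work: the request's roles (`reader`, `member`) don't satisfy the rule (admin-only by default), so port attachment proceeds only on non-external networks. A self-contained sketch of the same kind of check; the rule string is illustrative, not nova's exact default:

```python
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.ConfigOpts())
enforcer.register_default(
    policy.RuleDefault("network:attach_external_network", "role:admin"))

creds = {"roles": ["reader", "member"],
         "project_id": "9f6be4e572624210b91193c011607c08"}
# Returns False for these credentials, matching the DEBUG line above.
print(enforcer.enforce("network:attach_external_network", {}, creds))
```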
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.220 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
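Note the wrapper around `qemu-img info`: oslo.concurrency's prlimit shim caps address space at 1 GiB and CPU time at 30 s so a crafted image can't exhaust the host while being inspected. The logged command, reproduced verbatim and parsed:

```python
import json
import subprocess

base = "/var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34"
out = subprocess.run(
    ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
     "--as=1073741824", "--cpu=30", "--",
     "env", "LC_ALL=C", "LANG=C",
     "qemu-img", "info", base, "--force-share", "--output=json"],
    check=True, capture_output=True, text=True,
).stdout
info = json.loads(out)
print(info["format"], info["virtual-size"])  # e.g. qcow2, bytes
```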
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.221 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.223 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.223 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:37.251Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:23:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:37.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.380 256155 DEBUG nova.storage.rbd_utils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image a0d2df94-256c-4d12-b661-60feb351cd23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:23:37 np0005540825 nova_compute[256151]: 2025-12-01 10:23:37.385 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 a0d2df94-256c-4d12-b661-60feb351cd23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:23:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:23:38 np0005540825 nova_compute[256151]: 2025-12-01 10:23:38.228 256155 DEBUG nova.network.neutron [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Successfully created port: 1ca40fc4-7826-4815-a0f0-7b7650b2569c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 05:23:38 np0005540825 nova_compute[256151]: 2025-12-01 10:23:38.247 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 a0d2df94-256c-4d12-b661-60feb351cd23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.862s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:23:38 np0005540825 nova_compute[256151]: 2025-12-01 10:23:38.343 256155 DEBUG nova.storage.rbd_utils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] resizing rbd image a0d2df94-256c-4d12-b661-60feb351cd23_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
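Image creation for an RBD-backed instance is two steps, both visible above: `rbd import` copies the cached base image into the `vms` pool, then the new image is grown to the flavor's 1 GiB root disk. The resize goes through Ceph's Python bindings; a hedged equivalent (nova's own helper names differ):

```python
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "a0d2df94-256c-4d12-b661-60feb351cd23_disk") as image:
            image.resize(1073741824)  # bytes, matching the logged target size
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```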
Dec  1 05:23:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:38.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:38.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:38 np0005540825 nova_compute[256151]: 2025-12-01 10:23:38.594 256155 DEBUG nova.objects.instance [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'migration_context' on Instance uuid a0d2df94-256c-4d12-b661-60feb351cd23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:23:38 np0005540825 nova_compute[256151]: 2025-12-01 10:23:38.612 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 05:23:38 np0005540825 nova_compute[256151]: 2025-12-01 10:23:38.613 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Ensure instance console log exists: /var/lib/nova/instances/a0d2df94-256c-4d12-b661-60feb351cd23/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 05:23:38 np0005540825 nova_compute[256151]: 2025-12-01 10:23:38.614 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:38 np0005540825 nova_compute[256151]: 2025-12-01 10:23:38.614 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:38 np0005540825 nova_compute[256151]: 2025-12-01 10:23:38.615 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:23:39
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.nfs', 'vms', 'backups']
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
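The balancer pass above ran in upmap mode across all twelve pools and prepared 0 of an allowed 10 upmap changes, i.e. PG placement is already optimal and no pg-upmap-items entries were issued. The same state is available on demand from the CLI (run with a client that has mgr caps; the `openstack` client seen earlier in this log may not):

```python
import subprocess

# Reports mode, whether the balancer is active, and the last optimization.
subprocess.run(["ceph", "balancer", "status"], check=True)
```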
Dec  1 05:23:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:23:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:23:39 np0005540825 nova_compute[256151]: 2025-12-01 10:23:39.572 256155 DEBUG nova.network.neutron [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Successfully updated port: 1ca40fc4-7826-4815-a0f0-7b7650b2569c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 05:23:39 np0005540825 nova_compute[256151]: 2025-12-01 10:23:39.590 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:23:39 np0005540825 nova_compute[256151]: 2025-12-01 10:23:39.590 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquired lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:23:39 np0005540825 nova_compute[256151]: 2025-12-01 10:23:39.590 256155 DEBUG nova.network.neutron [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
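
Nova is rebuilding the instance's network info cache here after Neutron reported the port update. For reference, the same port data can be pulled straight from Neutron with openstacksdk; a sketch, where the cloud name "overcloud" is a hypothetical clouds.yaml entry and the instance UUID is the one in the log:

    import openstack

    conn = openstack.connect(cloud="overcloud")
    for port in conn.network.ports(device_id="a0d2df94-256c-4d12-b661-60feb351cd23"):
        # port 1ca40fc4-7826-4815-a0f0-7b7650b2569c shows up here once bound
        print(port.id, port.mac_address,
              [ip["ip_address"] for ip in port.fixed_ips])
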
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:23:39 np0005540825 nova_compute[256151]: 2025-12-01 10:23:39.676 256155 DEBUG nova.compute.manager [req-d4ac05db-f4d0-40cf-880c-cef83d8bcfd7 req-a0a3b22f-09b5-4d80-9ffb-ae45bf33e4d2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Received event network-changed-1ca40fc4-7826-4815-a0f0-7b7650b2569c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:23:39 np0005540825 nova_compute[256151]: 2025-12-01 10:23:39.677 256155 DEBUG nova.compute.manager [req-d4ac05db-f4d0-40cf-880c-cef83d8bcfd7 req-a0a3b22f-09b5-4d80-9ffb-ae45bf33e4d2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Refreshing instance network info cache due to event network-changed-1ca40fc4-7826-4815-a0f0-7b7650b2569c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 05:23:39 np0005540825 nova_compute[256151]: 2025-12-01 10:23:39.677 256155 DEBUG oslo_concurrency.lockutils [req-d4ac05db-f4d0-40cf-880c-cef83d8bcfd7 req-a0a3b22f-09b5-4d80-9ffb-ae45bf33e4d2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:23:39 np0005540825 nova_compute[256151]: 2025-12-01 10:23:39.830 256155 DEBUG nova.network.neutron [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 05:23:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
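
The pg_autoscaler rows above all follow one rule: pg target = capacity ratio * bias * (OSD count * mon_target_pg_per_osd), quantized to the nearest power of two and clamped by the pool's pg_num_min. With 3 OSDs and the default 100 PGs per OSD (both inferred from this 60 GiB cluster, not stated in the log) the multiplier is 300, which reproduces every row, e.g. 'images': 0.000665858... * 1.0 * 300 = 0.19976. A sketch of that arithmetic:

    import math

    def pg_target(capacity_ratio, bias, osds=3, target_pg_per_osd=100):
        # share of raw capacity, scaled by bias and the cluster PG budget
        return capacity_ratio * bias * osds * target_pg_per_osd

    def quantize(target, pg_num_min=1):
        # nearest power of two, never below the pool's configured minimum
        if target <= pg_num_min:
            return pg_num_min
        return 2 ** round(math.log2(target))

    # 'images' row: tiny target, so it sits at its assumed floor of 32
    t = pg_target(0.000665858301588852, 1.0)
    print(round(t, 5), quantize(t, pg_num_min=32))  # -> 0.19976 32
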
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:23:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.323 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:40.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
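
The anonymous "HEAD /" requests from 192.168.122.100 and .102 look like load-balancer health probes against the RGW beast frontend. An equivalent probe from Python; the port is an assumption, since the log records only the client addresses:

    import requests

    # hypothetical RGW endpoint and port; adjust to the actual frontend
    resp = requests.head("http://192.168.122.100:8080/", timeout=2)
    print(resp.status_code)  # a healthy gateway answers 200, empty body
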
Dec  1 05:23:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.608 256155 DEBUG nova.network.neutron [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Updating instance_info_cache with network_info: [{"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.628 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Releasing lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.629 256155 DEBUG nova.compute.manager [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Instance network_info: |[{"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.629 256155 DEBUG oslo_concurrency.lockutils [req-d4ac05db-f4d0-40cf-880c-cef83d8bcfd7 req-a0a3b22f-09b5-4d80-9ffb-ae45bf33e4d2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.629 256155 DEBUG nova.network.neutron [req-d4ac05db-f4d0-40cf-880c-cef83d8bcfd7 req-a0a3b22f-09b5-4d80-9ffb-ae45bf33e4d2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Refreshing network info cache for port 1ca40fc4-7826-4815-a0f0-7b7650b2569c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.634 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Start _get_guest_xml network_info=[{"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '8f75d6de-6ce0-44e1-b417-d0111424475b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.643 256155 WARNING nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.648 256155 DEBUG nova.virt.libvirt.host [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.649 256155 DEBUG nova.virt.libvirt.host [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.654 256155 DEBUG nova.virt.libvirt.host [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.655 256155 DEBUG nova.virt.libvirt.host [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.656 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.656 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T10:14:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e731827-1896-49cd-b0cc-12903555d217',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.657 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.657 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.658 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.659 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.659 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.660 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.660 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.661 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.661 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.662 256155 DEBUG nova.virt.hardware [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
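
The nova.virt.hardware lines above enumerate every sockets*cores*threads factorization of the flavor's vCPU count under the 65536 per-dimension limits, then sort by preference; with vcpus=1 the only factorization is 1:1:1, which is exactly what gets chosen. A toy re-derivation, not Nova's code:

    def possible_topologies(vcpus, limit=65536):
        # every (sockets, cores, threads) triple whose product equals vcpus,
        # subject to the same per-dimension cap the log reports
        cap = min(vcpus, limit)
        for s in range(1, cap + 1):
            for c in range(1, cap + 1):
                for t in range(1, cap + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # -> [(1, 1, 1)], as in the log
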
Dec  1 05:23:40 np0005540825 nova_compute[256151]: 2025-12-01 10:23:40.667 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:23:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:23:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614357069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.155 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.186 256155 DEBUG nova.storage.rbd_utils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image a0d2df94-256c-4d12-b661-60feb351cd23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
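
The rbd_utils message is the normal probe-before-create path for the config-drive image. With the rbd Python bindings the same existence check surfaces as an ImageNotFound exception; a sketch using the pool, image name and client id shown in the log, credentials assumed present:

    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            name = "a0d2df94-256c-4d12-b661-60feb351cd23_disk.config"
            try:
                with rbd.Image(ioctx, name) as image:
                    print("exists:", image.size())
            except rbd.ImageNotFound:
                print("does not exist")  # the state the log reports
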
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.191 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:23:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:41] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:23:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:41] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:23:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:23:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/649594026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.675 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
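
Both "ceph mon dump" round-trips are executed through oslo.concurrency's processutils, which produces the "Running cmd"/"returned: 0 in 0.48s" pairs above. The call itself reduces to the following; note that execute() raises ProcessExecutionError on a non-zero exit rather than returning it:

    from oslo_concurrency import processutils

    # same command line the log shows
    out, err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    print(out[:120])
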
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.678 256155 DEBUG nova.virt.libvirt.vif [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:23:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-517852973',display_name='tempest-TestNetworkBasicOps-server-517852973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-517852973',id=10,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE8f2tMgtdY6uK/TEM/G824tb8XiUTe0AYFCR1sI4EKgZMxehjpRJioEJcBzRvIncR3SkpZWtPTHJ5NBzvJ8NwGHDK3YfhuNmYFLbCp53kUD0BOfGUJC8kaomMCPqNo9EA==',key_name='tempest-TestNetworkBasicOps-2021277470',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-xxy6holk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:23:36Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=a0d2df94-256c-4d12-b661-60feb351cd23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.679 256155 DEBUG nova.network.os_vif_util [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.680 256155 DEBUG nova.network.os_vif_util [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:d8:bd,bridge_name='br-int',has_traffic_filtering=True,id=1ca40fc4-7826-4815-a0f0-7b7650b2569c,network=Network(88d5f9d7-997a-4f2b-b635-2e7f48a3b027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ca40fc4-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.683 256155 DEBUG nova.objects.instance [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'pci_devices' on Instance uuid a0d2df94-256c-4d12-b661-60feb351cd23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.713 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] End _get_guest_xml xml=<domain type="kvm">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  <uuid>a0d2df94-256c-4d12-b661-60feb351cd23</uuid>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  <name>instance-0000000a</name>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  <memory>131072</memory>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  <vcpu>1</vcpu>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  <metadata>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <nova:name>tempest-TestNetworkBasicOps-server-517852973</nova:name>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <nova:creationTime>2025-12-01 10:23:40</nova:creationTime>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <nova:flavor name="m1.nano">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <nova:memory>128</nova:memory>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <nova:disk>1</nova:disk>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <nova:swap>0</nova:swap>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <nova:vcpus>1</nova:vcpus>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      </nova:flavor>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <nova:owner>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <nova:user uuid="5b56a238daf0445798410e51caada0ff">tempest-TestNetworkBasicOps-1248115384-project-member</nova:user>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <nova:project uuid="9f6be4e572624210b91193c011607c08">tempest-TestNetworkBasicOps-1248115384</nova:project>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      </nova:owner>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <nova:root type="image" uuid="8f75d6de-6ce0-44e1-b417-d0111424475b"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <nova:ports>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <nova:port uuid="1ca40fc4-7826-4815-a0f0-7b7650b2569c">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        </nova:port>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      </nova:ports>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    </nova:instance>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  </metadata>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  <sysinfo type="smbios">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <system>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <entry name="manufacturer">RDO</entry>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <entry name="product">OpenStack Compute</entry>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <entry name="serial">a0d2df94-256c-4d12-b661-60feb351cd23</entry>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <entry name="uuid">a0d2df94-256c-4d12-b661-60feb351cd23</entry>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <entry name="family">Virtual Machine</entry>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    </system>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  </sysinfo>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  <os>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <boot dev="hd"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <smbios mode="sysinfo"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  </os>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  <features>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <acpi/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <apic/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <vmcoreinfo/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  </features>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  <clock offset="utc">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <timer name="hpet" present="no"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  </clock>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  <cpu mode="host-model" match="exact">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  </cpu>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  <devices>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <disk type="network" device="disk">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/a0d2df94-256c-4d12-b661-60feb351cd23_disk">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <target dev="vda" bus="virtio"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <disk type="network" device="cdrom">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/a0d2df94-256c-4d12-b661-60feb351cd23_disk.config">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <target dev="sda" bus="sata"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <interface type="ethernet">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <mac address="fa:16:3e:8e:d8:bd"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <mtu size="1442"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <target dev="tap1ca40fc4-78"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    </interface>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <serial type="pty">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <log file="/var/lib/nova/instances/a0d2df94-256c-4d12-b661-60feb351cd23/console.log" append="off"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    </serial>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <video>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    </video>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <input type="tablet" bus="usb"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <rng model="virtio">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <backend model="random">/dev/urandom</backend>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    </rng>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <controller type="usb" index="0"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    <memballoon model="virtio">
Dec  1 05:23:41 np0005540825 nova_compute[256151]:      <stats period="10"/>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:    </memballoon>
Dec  1 05:23:41 np0005540825 nova_compute[256151]:  </devices>
Dec  1 05:23:41 np0005540825 nova_compute[256151]: </domain>
Dec  1 05:23:41 np0005540825 nova_compute[256151]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
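
The domain XML dumped above (between "Start _get_guest_xml" and this "End _get_guest_xml" trailer) is a complete libvirt definition. Handing such a definition to libvirt is a define-then-create pair, sketched below with the python libvirt bindings; Nova's driver wraps considerably more logic around this step:

    import libvirt

    with open("domain.xml") as f:   # the XML dumped above, saved to a file
        xml = f.read()

    conn = libvirt.open("qemu:///system")  # needs libvirtd access/privileges
    dom = conn.defineXML(xml)              # persist the domain definition
    dom.create()                           # start instance-0000000a
    print(dom.name(), dom.ID())
    conn.close()
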
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.714 256155 DEBUG nova.compute.manager [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Preparing to wait for external event network-vif-plugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.715 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.715 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.715 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.716 256155 DEBUG nova.virt.libvirt.vif [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:23:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-517852973',display_name='tempest-TestNetworkBasicOps-server-517852973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-517852973',id=10,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE8f2tMgtdY6uK/TEM/G824tb8XiUTe0AYFCR1sI4EKgZMxehjpRJioEJcBzRvIncR3SkpZWtPTHJ5NBzvJ8NwGHDK3YfhuNmYFLbCp53kUD0BOfGUJC8kaomMCPqNo9EA==',key_name='tempest-TestNetworkBasicOps-2021277470',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-xxy6holk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:23:36Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=a0d2df94-256c-4d12-b661-60feb351cd23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.716 256155 DEBUG nova.network.os_vif_util [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.717 256155 DEBUG nova.network.os_vif_util [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:d8:bd,bridge_name='br-int',has_traffic_filtering=True,id=1ca40fc4-7826-4815-a0f0-7b7650b2569c,network=Network(88d5f9d7-997a-4f2b-b635-2e7f48a3b027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ca40fc4-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.717 256155 DEBUG os_vif [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:d8:bd,bridge_name='br-int',has_traffic_filtering=True,id=1ca40fc4-7826-4815-a0f0-7b7650b2569c,network=Network(88d5f9d7-997a-4f2b-b635-2e7f48a3b027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ca40fc4-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.718 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.718 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.718 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.722 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.722 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca40fc4-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.722 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ca40fc4-78, col_values=(('external_ids', {'iface-id': '1ca40fc4-7826-4815-a0f0-7b7650b2569c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:d8:bd', 'vm-uuid': 'a0d2df94-256c-4d12-b661-60feb351cd23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.724 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:41 np0005540825 NetworkManager[48963]: <info>  [1764584621.7248] manager: (tap1ca40fc4-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.726 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.733 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.735 256155 INFO os_vif [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:d8:bd,bridge_name='br-int',has_traffic_filtering=True,id=1ca40fc4-7826-4815-a0f0-7b7650b2569c,network=Network(88d5f9d7-997a-4f2b-b635-2e7f48a3b027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ca40fc4-78')#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.809 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.809 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.810 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No VIF found with MAC fa:16:3e:8e:d8:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.810 256155 INFO nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Using config drive#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.842 256155 DEBUG nova.storage.rbd_utils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image a0d2df94-256c-4d12-b661-60feb351cd23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.849 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.867 256155 DEBUG nova.network.neutron [req-d4ac05db-f4d0-40cf-880c-cef83d8bcfd7 req-a0a3b22f-09b5-4d80-9ffb-ae45bf33e4d2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Updated VIF entry in instance network info cache for port 1ca40fc4-7826-4815-a0f0-7b7650b2569c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.868 256155 DEBUG nova.network.neutron [req-d4ac05db-f4d0-40cf-880c-cef83d8bcfd7 req-a0a3b22f-09b5-4d80-9ffb-ae45bf33e4d2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Updating instance_info_cache with network_info: [{"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:23:41 np0005540825 nova_compute[256151]: 2025-12-01 10:23:41.884 256155 DEBUG oslo_concurrency.lockutils [req-d4ac05db-f4d0-40cf-880c-cef83d8bcfd7 req-a0a3b22f-09b5-4d80-9ffb-ae45bf33e4d2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:23:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:23:42 np0005540825 nova_compute[256151]: 2025-12-01 10:23:42.133 256155 INFO nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Creating config drive at /var/lib/nova/instances/a0d2df94-256c-4d12-b661-60feb351cd23/disk.config#033[00m
Dec  1 05:23:42 np0005540825 nova_compute[256151]: 2025-12-01 10:23:42.142 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0d2df94-256c-4d12-b661-60feb351cd23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4nrwskvx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:23:42 np0005540825 nova_compute[256151]: 2025-12-01 10:23:42.269 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0d2df94-256c-4d12-b661-60feb351cd23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4nrwskvx" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:23:42 np0005540825 nova_compute[256151]: 2025-12-01 10:23:42.298 256155 DEBUG nova.storage.rbd_utils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image a0d2df94-256c-4d12-b661-60feb351cd23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:23:42 np0005540825 nova_compute[256151]: 2025-12-01 10:23:42.302 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0d2df94-256c-4d12-b661-60feb351cd23/disk.config a0d2df94-256c-4d12-b661-60feb351cd23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:23:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:42.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:42.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:43 np0005540825 nova_compute[256151]: 2025-12-01 10:23:43.308 256155 DEBUG oslo_concurrency.processutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a0d2df94-256c-4d12-b661-60feb351cd23/disk.config a0d2df94-256c-4d12-b661-60feb351cd23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:23:43 np0005540825 nova_compute[256151]: 2025-12-01 10:23:43.309 256155 INFO nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Deleting local config drive /var/lib/nova/instances/a0d2df94-256c-4d12-b661-60feb351cd23/disk.config because it was imported into RBD.#033[00m
Dec  1 05:23:43 np0005540825 kernel: tap1ca40fc4-78: entered promiscuous mode
Dec  1 05:23:43 np0005540825 NetworkManager[48963]: <info>  [1764584623.3716] manager: (tap1ca40fc4-78): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Dec  1 05:23:43 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:43Z|00068|binding|INFO|Claiming lport 1ca40fc4-7826-4815-a0f0-7b7650b2569c for this chassis.
Dec  1 05:23:43 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:43Z|00069|binding|INFO|1ca40fc4-7826-4815-a0f0-7b7650b2569c: Claiming fa:16:3e:8e:d8:bd 10.100.0.13
Dec  1 05:23:43 np0005540825 nova_compute[256151]: 2025-12-01 10:23:43.374 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:43 np0005540825 nova_compute[256151]: 2025-12-01 10:23:43.382 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:43 np0005540825 nova_compute[256151]: 2025-12-01 10:23:43.386 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.403 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:d8:bd 10.100.0.13'], port_security=['fa:16:3e:8e:d8:bd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a0d2df94-256c-4d12-b661-60feb351cd23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88d5f9d7-997a-4f2b-b635-2e7f48a3b027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3329977c-bebc-4580-be9d-02d5bf17e4f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b94d8cd4-086c-4f0e-aa55-7d70d05d5d6e, chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=1ca40fc4-7826-4815-a0f0-7b7650b2569c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.406 163291 INFO neutron.agent.ovn.metadata.agent [-] Port 1ca40fc4-7826-4815-a0f0-7b7650b2569c in datapath 88d5f9d7-997a-4f2b-b635-2e7f48a3b027 bound to our chassis#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.408 163291 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88d5f9d7-997a-4f2b-b635-2e7f48a3b027#033[00m
Dec  1 05:23:43 np0005540825 systemd-machined[216307]: New machine qemu-5-instance-0000000a.
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.426 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[9e65cfb1-cd0d-4375-b040-9bf3174147ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.427 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap88d5f9d7-91 in ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.430 262668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap88d5f9d7-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.430 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[d08e6ccb-3ee2-4717-8b57-5a2a48ef12a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.431 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[64993d8d-9786-4fba-8f6d-4310d68100ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 systemd[1]: Started Virtual Machine qemu-5-instance-0000000a.
Dec  1 05:23:43 np0005540825 systemd-udevd[274966]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.451 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[cc66e482-cb46-4e2f-b53e-ad297e2e8085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 NetworkManager[48963]: <info>  [1764584623.4646] device (tap1ca40fc4-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 05:23:43 np0005540825 NetworkManager[48963]: <info>  [1764584623.4659] device (tap1ca40fc4-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 05:23:43 np0005540825 nova_compute[256151]: 2025-12-01 10:23:43.483 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:43 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:43Z|00070|binding|INFO|Setting lport 1ca40fc4-7826-4815-a0f0-7b7650b2569c ovn-installed in OVS
Dec  1 05:23:43 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:43Z|00071|binding|INFO|Setting lport 1ca40fc4-7826-4815-a0f0-7b7650b2569c up in Southbound
Dec  1 05:23:43 np0005540825 nova_compute[256151]: 2025-12-01 10:23:43.487 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.491 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[833c8271-cdbf-48cd-8514-0b6a8a1ce0a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.530 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[750e8982-140b-4bbb-a8ba-158f53ed14f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.536 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[88940899-7fbe-4d52-aa77-9036ff9003bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 NetworkManager[48963]: <info>  [1764584623.5379] manager: (tap88d5f9d7-90): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Dec  1 05:23:43 np0005540825 systemd-udevd[274969]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.576 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a372fd-1338-4e8e-beb3-a445781944a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.579 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[08ff182d-c78c-4284-a25b-7a2fce6f8eb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 NetworkManager[48963]: <info>  [1764584623.6081] device (tap88d5f9d7-90): carrier: link connected
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.617 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[22d5b643-04dd-4d87-9254-dbd2defd8bd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.637 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[3656a2f4-029d-4564-b68d-aef1d2eebec9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88d5f9d7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:c4:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449224, 'reachable_time': 27054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274998, 'error': None, 'target': 'ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.656 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[60170eb6-ef25-483b-8b8c-5e0e12686385]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:c48f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449224, 'tstamp': 449224}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274999, 'error': None, 'target': 'ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.678 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb24dcd-5377-4534-8691-cac2008b8421]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88d5f9d7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:c4:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449224, 'reachable_time': 27054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275001, 'error': None, 'target': 'ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:43.695Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:23:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:43.695Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:23:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:43.695Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.711 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[46d2254d-f079-4786-bbf8-27d31c13f916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.769 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ae88ff-81dd-4839-9498-4686cdf4cae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.770 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88d5f9d7-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.770 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.771 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88d5f9d7-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:23:43 np0005540825 nova_compute[256151]: 2025-12-01 10:23:43.772 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:43 np0005540825 NetworkManager[48963]: <info>  [1764584623.7736] manager: (tap88d5f9d7-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec  1 05:23:43 np0005540825 kernel: tap88d5f9d7-90: entered promiscuous mode
Dec  1 05:23:43 np0005540825 nova_compute[256151]: 2025-12-01 10:23:43.777 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.777 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88d5f9d7-90, col_values=(('external_ids', {'iface-id': '7f4715cd-bac9-4d1c-ac0e-14357d4031d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:23:43 np0005540825 nova_compute[256151]: 2025-12-01 10:23:43.778 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:43 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:43Z|00072|binding|INFO|Releasing lport 7f4715cd-bac9-4d1c-ac0e-14357d4031d7 from this chassis (sb_readonly=0)
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.798 163291 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/88d5f9d7-997a-4f2b-b635-2e7f48a3b027.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/88d5f9d7-997a-4f2b-b635-2e7f48a3b027.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 05:23:43 np0005540825 nova_compute[256151]: 2025-12-01 10:23:43.797 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.802 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa235e1-03cf-4052-a23f-210d6a05b8b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.803 163291 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: global
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    log         /dev/log local0 debug
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    log-tag     haproxy-metadata-proxy-88d5f9d7-997a-4f2b-b635-2e7f48a3b027
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    user        root
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    group       root
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    maxconn     1024
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    pidfile     /var/lib/neutron/external/pids/88d5f9d7-997a-4f2b-b635-2e7f48a3b027.pid.haproxy
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    daemon
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: defaults
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    log global
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    mode http
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    option httplog
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    option dontlognull
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    option http-server-close
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    option forwardfor
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    retries                 3
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    timeout http-request    30s
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    timeout connect         30s
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    timeout client          32s
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    timeout server          32s
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    timeout http-keep-alive 30s
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: listen listener
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    bind 169.254.169.254:80
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]:    http-request add-header X-OVN-Network-ID 88d5f9d7-997a-4f2b-b635-2e7f48a3b027
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 05:23:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:23:43.803 163291 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027', 'env', 'PROCESS_TAG=haproxy-88d5f9d7-997a-4f2b-b635-2e7f48a3b027', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/88d5f9d7-997a-4f2b-b635-2e7f48a3b027.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 05:23:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:23:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.010 256155 DEBUG nova.compute.manager [req-8a286222-5829-448b-9aa8-172df965f3e9 req-47454da1-d08f-4102-b9ed-8454cd122ecd dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Received event network-vif-plugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.011 256155 DEBUG oslo_concurrency.lockutils [req-8a286222-5829-448b-9aa8-172df965f3e9 req-47454da1-d08f-4102-b9ed-8454cd122ecd dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.012 256155 DEBUG oslo_concurrency.lockutils [req-8a286222-5829-448b-9aa8-172df965f3e9 req-47454da1-d08f-4102-b9ed-8454cd122ecd dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.013 256155 DEBUG oslo_concurrency.lockutils [req-8a286222-5829-448b-9aa8-172df965f3e9 req-47454da1-d08f-4102-b9ed-8454cd122ecd dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.013 256155 DEBUG nova.compute.manager [req-8a286222-5829-448b-9aa8-172df965f3e9 req-47454da1-d08f-4102-b9ed-8454cd122ecd dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Processing event network-vif-plugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.119 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584624.1184406, a0d2df94-256c-4d12-b661-60feb351cd23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.121 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] VM Started (Lifecycle Event)#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.124 256155 DEBUG nova.compute.manager [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.127 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.131 256155 INFO nova.virt.libvirt.driver [-] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Instance spawned successfully.#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.131 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.151 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.158 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.161 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.162 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.163 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.164 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.165 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.166 256155 DEBUG nova.virt.libvirt.driver [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.214 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.215 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584624.1192226, a0d2df94-256c-4d12-b661-60feb351cd23 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.216 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] VM Paused (Lifecycle Event)#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.251 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.260 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584624.1267943, a0d2df94-256c-4d12-b661-60feb351cd23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.260 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] VM Resumed (Lifecycle Event)#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.267 256155 INFO nova.compute.manager [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Took 7.25 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.268 256155 DEBUG nova.compute.manager [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:23:44 np0005540825 podman[275076]: 2025-12-01 10:23:44.192111006 +0000 UTC m=+0.033682036 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.309 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.313 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.375 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.391 256155 INFO nova.compute.manager [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Took 8.40 seconds to build instance.#033[00m
Dec  1 05:23:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:44.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:44 np0005540825 podman[275076]: 2025-12-01 10:23:44.410685149 +0000 UTC m=+0.252256159 container create 67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 05:23:44 np0005540825 nova_compute[256151]: 2025-12-01 10:23:44.419 256155 DEBUG oslo_concurrency.lockutils [None req-9c9a6f7a-b87c-4c52-bcaa-5f8520d88fce 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:23:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:23:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:44.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:23:44 np0005540825 systemd[1]: Started libpod-conmon-67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b.scope.
Dec  1 05:23:44 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:23:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ae7b2f31508576a7e44ea65af091fe8735070f1fb2e3a9e220ac63a92f8c53/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 05:23:44 np0005540825 podman[275076]: 2025-12-01 10:23:44.544141975 +0000 UTC m=+0.385713095 container init 67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 05:23:44 np0005540825 podman[275076]: 2025-12-01 10:23:44.556772094 +0000 UTC m=+0.398343144 container start 67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:23:44 np0005540825 neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027[275091]: [NOTICE]   (275095) : New worker (275097) forked
Dec  1 05:23:44 np0005540825 neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027[275091]: [NOTICE]   (275095) : Loading success.
Dec  1 05:23:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Dec  1 05:23:46 np0005540825 nova_compute[256151]: 2025-12-01 10:23:46.141 256155 DEBUG nova.compute.manager [req-086e52dd-923e-4d4c-a3e7-22444d4c6479 req-fde869fb-5564-4c8e-9732-024e9353192c dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Received event network-vif-plugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:23:46 np0005540825 nova_compute[256151]: 2025-12-01 10:23:46.142 256155 DEBUG oslo_concurrency.lockutils [req-086e52dd-923e-4d4c-a3e7-22444d4c6479 req-fde869fb-5564-4c8e-9732-024e9353192c dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:23:46 np0005540825 nova_compute[256151]: 2025-12-01 10:23:46.142 256155 DEBUG oslo_concurrency.lockutils [req-086e52dd-923e-4d4c-a3e7-22444d4c6479 req-fde869fb-5564-4c8e-9732-024e9353192c dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:23:46 np0005540825 nova_compute[256151]: 2025-12-01 10:23:46.143 256155 DEBUG oslo_concurrency.lockutils [req-086e52dd-923e-4d4c-a3e7-22444d4c6479 req-fde869fb-5564-4c8e-9732-024e9353192c dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:23:46 np0005540825 nova_compute[256151]: 2025-12-01 10:23:46.143 256155 DEBUG nova.compute.manager [req-086e52dd-923e-4d4c-a3e7-22444d4c6479 req-fde869fb-5564-4c8e-9732-024e9353192c dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] No waiting events found dispatching network-vif-plugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 05:23:46 np0005540825 nova_compute[256151]: 2025-12-01 10:23:46.144 256155 WARNING nova.compute.manager [req-086e52dd-923e-4d4c-a3e7-22444d4c6479 req-fde869fb-5564-4c8e-9732-024e9353192c dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Received unexpected event network-vif-plugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c for instance with vm_state active and task_state None.
Dec  1 05:23:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:46.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:46.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:46 np0005540825 nova_compute[256151]: 2025-12-01 10:23:46.738 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:23:46 np0005540825 nova_compute[256151]: 2025-12-01 10:23:46.834 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:23:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:47.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:23:47 np0005540825 podman[275133]: 2025-12-01 10:23:47.274692218 +0000 UTC m=+0.132440330 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 05:23:47 np0005540825 NetworkManager[48963]: <info>  [1764584627.4727] manager: (patch-provnet-da274a4a-a49c-4f01-b728-391696cd2672-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Dec  1 05:23:47 np0005540825 NetworkManager[48963]: <info>  [1764584627.4735] manager: (patch-br-int-to-provnet-da274a4a-a49c-4f01-b728-391696cd2672): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec  1 05:23:47 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:47Z|00073|binding|INFO|Releasing lport 7f4715cd-bac9-4d1c-ac0e-14357d4031d7 from this chassis (sb_readonly=0)
Dec  1 05:23:47 np0005540825 nova_compute[256151]: 2025-12-01 10:23:47.475 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:23:47 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:47Z|00074|binding|INFO|Releasing lport 7f4715cd-bac9-4d1c-ac0e-14357d4031d7 from this chassis (sb_readonly=0)
Dec  1 05:23:47 np0005540825 nova_compute[256151]: 2025-12-01 10:23:47.510 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:23:47 np0005540825 nova_compute[256151]: 2025-12-01 10:23:47.514 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:23:47 np0005540825 nova_compute[256151]: 2025-12-01 10:23:47.783 256155 DEBUG nova.compute.manager [req-e446d30f-62ae-49b2-aaae-3aa19a769a95 req-815aa50d-938c-420b-9215-482133095171 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Received event network-changed-1ca40fc4-7826-4815-a0f0-7b7650b2569c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:23:47 np0005540825 nova_compute[256151]: 2025-12-01 10:23:47.784 256155 DEBUG nova.compute.manager [req-e446d30f-62ae-49b2-aaae-3aa19a769a95 req-815aa50d-938c-420b-9215-482133095171 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Refreshing instance network info cache due to event network-changed-1ca40fc4-7826-4815-a0f0-7b7650b2569c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 05:23:47 np0005540825 nova_compute[256151]: 2025-12-01 10:23:47.784 256155 DEBUG oslo_concurrency.lockutils [req-e446d30f-62ae-49b2-aaae-3aa19a769a95 req-815aa50d-938c-420b-9215-482133095171 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:23:47 np0005540825 nova_compute[256151]: 2025-12-01 10:23:47.785 256155 DEBUG oslo_concurrency.lockutils [req-e446d30f-62ae-49b2-aaae-3aa19a769a95 req-815aa50d-938c-420b-9215-482133095171 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:23:47 np0005540825 nova_compute[256151]: 2025-12-01 10:23:47.786 256155 DEBUG nova.network.neutron [req-e446d30f-62ae-49b2-aaae-3aa19a769a95 req-815aa50d-938c-420b-9215-482133095171 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Refreshing network info cache for port 1ca40fc4-7826-4815-a0f0-7b7650b2569c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 05:23:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Dec  1 05:23:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:48.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:23:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Dec  1 05:23:50 np0005540825 nova_compute[256151]: 2025-12-01 10:23:50.285 256155 DEBUG nova.network.neutron [req-e446d30f-62ae-49b2-aaae-3aa19a769a95 req-815aa50d-938c-420b-9215-482133095171 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Updated VIF entry in instance network info cache for port 1ca40fc4-7826-4815-a0f0-7b7650b2569c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 05:23:50 np0005540825 nova_compute[256151]: 2025-12-01 10:23:50.285 256155 DEBUG nova.network.neutron [req-e446d30f-62ae-49b2-aaae-3aa19a769a95 req-815aa50d-938c-420b-9215-482133095171 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Updating instance_info_cache with network_info: [{"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 05:23:50 np0005540825 nova_compute[256151]: 2025-12-01 10:23:50.308 256155 DEBUG oslo_concurrency.lockutils [req-e446d30f-62ae-49b2-aaae-3aa19a769a95 req-815aa50d-938c-420b-9215-482133095171 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 05:23:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:23:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:50.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:23:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:50.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:51] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:23:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:23:51] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:23:51 np0005540825 nova_compute[256151]: 2025-12-01 10:23:51.741 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:23:51 np0005540825 nova_compute[256151]: 2025-12-01 10:23:51.835 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:23:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  1 05:23:52 np0005540825 nova_compute[256151]: 2025-12-01 10:23:52.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:23:52 np0005540825 nova_compute[256151]: 2025-12-01 10:23:52.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:23:52 np0005540825 nova_compute[256151]: 2025-12-01 10:23:52.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:23:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:52.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:52.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:52 np0005540825 nova_compute[256151]: 2025-12-01 10:23:52.823 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:23:52 np0005540825 nova_compute[256151]: 2025-12-01 10:23:52.823 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquired lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:23:52 np0005540825 nova_compute[256151]: 2025-12-01 10:23:52.824 256155 DEBUG nova.network.neutron [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 05:23:52 np0005540825 nova_compute[256151]: 2025-12-01 10:23:52.824 256155 DEBUG nova.objects.instance [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a0d2df94-256c-4d12-b661-60feb351cd23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 05:23:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  1 05:23:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 05:23:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:53.696Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:23:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:53.697Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:23:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  1 05:23:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:23:54 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 05:23:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:54.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:54.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:23:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:23:54 np0005540825 nova_compute[256151]: 2025-12-01 10:23:54.641 256155 DEBUG nova.network.neutron [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Updating instance_info_cache with network_info: [{"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 05:23:54 np0005540825 nova_compute[256151]: 2025-12-01 10:23:54.669 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Releasing lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 05:23:54 np0005540825 nova_compute[256151]: 2025-12-01 10:23:54.669 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 05:23:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  1 05:23:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 05:23:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Dec  1 05:23:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:23:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:56.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:23:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:23:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:56.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:23:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 7 op/s
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec  1 05:23:56 np0005540825 nova_compute[256151]: 2025-12-01 10:23:56.760 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:23:56 np0005540825 nova_compute[256151]: 2025-12-01 10:23:56.837 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 05:23:56 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:23:57 np0005540825 nova_compute[256151]: 2025-12-01 10:23:57.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:23:57 np0005540825 nova_compute[256151]: 2025-12-01 10:23:57.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:23:57 np0005540825 nova_compute[256151]: 2025-12-01 10:23:57.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:23:57 np0005540825 nova_compute[256151]: 2025-12-01 10:23:57.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:23:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:23:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:23:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:23:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:23:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:23:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:23:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:57.253Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:23:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:57.253Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:23:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:23:57.253Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:23:57 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:57Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:d8:bd 10.100.0.13
Dec  1 05:23:57 np0005540825 ovn_controller[153404]: 2025-12-01T10:23:57Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:d8:bd 10.100.0.13
Dec  1 05:23:57 np0005540825 podman[275342]: 2025-12-01 10:23:57.609354498 +0000 UTC m=+0.025867236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:23:57 np0005540825 podman[275342]: 2025-12-01 10:23:57.877299548 +0000 UTC m=+0.293812236 container create b6fc1bacaa87e017a8997a72cb082f2d5eaf85e8768805fd3ebf34a39e339f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_chebyshev, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 05:23:58 np0005540825 nova_compute[256151]: 2025-12-01 10:23:58.023 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:23:58 np0005540825 ceph-mon[74416]: Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec  1 05:23:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:23:58 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:23:58 np0005540825 systemd[1]: Started libpod-conmon-b6fc1bacaa87e017a8997a72cb082f2d5eaf85e8768805fd3ebf34a39e339f4d.scope.
Dec  1 05:23:58 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:23:58 np0005540825 podman[275342]: 2025-12-01 10:23:58.271127369 +0000 UTC m=+0.687640067 container init b6fc1bacaa87e017a8997a72cb082f2d5eaf85e8768805fd3ebf34a39e339f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_chebyshev, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:23:58 np0005540825 podman[275342]: 2025-12-01 10:23:58.278581429 +0000 UTC m=+0.695094107 container start b6fc1bacaa87e017a8997a72cb082f2d5eaf85e8768805fd3ebf34a39e339f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_chebyshev, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:23:58 np0005540825 zealous_chebyshev[275360]: 167 167
Dec  1 05:23:58 np0005540825 systemd[1]: libpod-b6fc1bacaa87e017a8997a72cb082f2d5eaf85e8768805fd3ebf34a39e339f4d.scope: Deactivated successfully.
Dec  1 05:23:58 np0005540825 podman[275342]: 2025-12-01 10:23:58.288423494 +0000 UTC m=+0.704936202 container attach b6fc1bacaa87e017a8997a72cb082f2d5eaf85e8768805fd3ebf34a39e339f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:23:58 np0005540825 podman[275342]: 2025-12-01 10:23:58.288887406 +0000 UTC m=+0.705400084 container died b6fc1bacaa87e017a8997a72cb082f2d5eaf85e8768805fd3ebf34a39e339f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:23:58 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f0fd1f5ab79813ab1d7d5dacdfd0d53331dd93e6249261765a8533ae5e6ef781-merged.mount: Deactivated successfully.
Dec  1 05:23:58 np0005540825 podman[275342]: 2025-12-01 10:23:58.36084919 +0000 UTC m=+0.777361868 container remove b6fc1bacaa87e017a8997a72cb082f2d5eaf85e8768805fd3ebf34a39e339f4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  1 05:23:58 np0005540825 systemd[1]: libpod-conmon-b6fc1bacaa87e017a8997a72cb082f2d5eaf85e8768805fd3ebf34a39e339f4d.scope: Deactivated successfully.
Dec  1 05:23:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:23:58.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:23:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:23:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:23:58.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:23:58 np0005540825 podman[275384]: 2025-12-01 10:23:58.547834564 +0000 UTC m=+0.048967256 container create 2c69da36f95799613c2eb75c24db410a2ec1ec0f7be5117eab14ecb025a36208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 05:23:58 np0005540825 systemd[1]: Started libpod-conmon-2c69da36f95799613c2eb75c24db410a2ec1ec0f7be5117eab14ecb025a36208.scope.
Dec  1 05:23:58 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:23:58 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5673fb1ad63274054ce84058db6eb2d2f82a3289806dba7f1ff73d7b231d2a9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:23:58 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5673fb1ad63274054ce84058db6eb2d2f82a3289806dba7f1ff73d7b231d2a9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:23:58 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5673fb1ad63274054ce84058db6eb2d2f82a3289806dba7f1ff73d7b231d2a9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:23:58 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5673fb1ad63274054ce84058db6eb2d2f82a3289806dba7f1ff73d7b231d2a9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:23:58 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5673fb1ad63274054ce84058db6eb2d2f82a3289806dba7f1ff73d7b231d2a9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:23:58 np0005540825 podman[275384]: 2025-12-01 10:23:58.522101803 +0000 UTC m=+0.023234525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:23:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 7 op/s
Dec  1 05:23:58 np0005540825 podman[275384]: 2025-12-01 10:23:58.665892167 +0000 UTC m=+0.167024899 container init 2c69da36f95799613c2eb75c24db410a2ec1ec0f7be5117eab14ecb025a36208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:23:58 np0005540825 podman[275384]: 2025-12-01 10:23:58.674822777 +0000 UTC m=+0.175955469 container start 2c69da36f95799613c2eb75c24db410a2ec1ec0f7be5117eab14ecb025a36208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 05:23:58 np0005540825 podman[275384]: 2025-12-01 10:23:58.685242457 +0000 UTC m=+0.186375199 container attach 2c69da36f95799613c2eb75c24db410a2ec1ec0f7be5117eab14ecb025a36208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:23:59 np0005540825 heuristic_shirley[275400]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:23:59 np0005540825 heuristic_shirley[275400]: --> All data devices are unavailable
Dec  1 05:23:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:23:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:23:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:23:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:23:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:23:59 np0005540825 podman[275384]: 2025-12-01 10:23:59.027484783 +0000 UTC m=+0.528617485 container died 2c69da36f95799613c2eb75c24db410a2ec1ec0f7be5117eab14ecb025a36208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:23:59 np0005540825 systemd[1]: libpod-2c69da36f95799613c2eb75c24db410a2ec1ec0f7be5117eab14ecb025a36208.scope: Deactivated successfully.
Dec  1 05:23:59 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5673fb1ad63274054ce84058db6eb2d2f82a3289806dba7f1ff73d7b231d2a9a-merged.mount: Deactivated successfully.
Dec  1 05:23:59 np0005540825 podman[275384]: 2025-12-01 10:23:59.115114368 +0000 UTC m=+0.616247080 container remove 2c69da36f95799613c2eb75c24db410a2ec1ec0f7be5117eab14ecb025a36208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:23:59 np0005540825 systemd[1]: libpod-conmon-2c69da36f95799613c2eb75c24db410a2ec1ec0f7be5117eab14ecb025a36208.scope: Deactivated successfully.
Dec  1 05:23:59 np0005540825 podman[275523]: 2025-12-01 10:23:59.745097376 +0000 UTC m=+0.041702052 container create 26c2ee8580a5fdc9a5bd1a76f2964db773e49456cf542cf8fae0cbcdcea705c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:23:59 np0005540825 systemd[1]: Started libpod-conmon-26c2ee8580a5fdc9a5bd1a76f2964db773e49456cf542cf8fae0cbcdcea705c3.scope.
Dec  1 05:23:59 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:23:59 np0005540825 podman[275523]: 2025-12-01 10:23:59.727069451 +0000 UTC m=+0.023674147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:23:59 np0005540825 podman[275523]: 2025-12-01 10:23:59.832996788 +0000 UTC m=+0.129601474 container init 26c2ee8580a5fdc9a5bd1a76f2964db773e49456cf542cf8fae0cbcdcea705c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  1 05:23:59 np0005540825 podman[275523]: 2025-12-01 10:23:59.839654147 +0000 UTC m=+0.136258813 container start 26c2ee8580a5fdc9a5bd1a76f2964db773e49456cf542cf8fae0cbcdcea705c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 05:23:59 np0005540825 podman[275523]: 2025-12-01 10:23:59.844145387 +0000 UTC m=+0.140750073 container attach 26c2ee8580a5fdc9a5bd1a76f2964db773e49456cf542cf8fae0cbcdcea705c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:23:59 np0005540825 happy_golick[275540]: 167 167
Dec  1 05:23:59 np0005540825 systemd[1]: libpod-26c2ee8580a5fdc9a5bd1a76f2964db773e49456cf542cf8fae0cbcdcea705c3.scope: Deactivated successfully.
Dec  1 05:23:59 np0005540825 podman[275545]: 2025-12-01 10:23:59.882071257 +0000 UTC m=+0.021497179 container died 26c2ee8580a5fdc9a5bd1a76f2964db773e49456cf542cf8fae0cbcdcea705c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  1 05:23:59 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0ce865c60583127650b0da02e59edec0875d04f56567968425e6c2f214034a9a-merged.mount: Deactivated successfully.
Dec  1 05:23:59 np0005540825 podman[275545]: 2025-12-01 10:23:59.985520166 +0000 UTC m=+0.124946078 container remove 26c2ee8580a5fdc9a5bd1a76f2964db773e49456cf542cf8fae0cbcdcea705c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:23:59 np0005540825 systemd[1]: libpod-conmon-26c2ee8580a5fdc9a5bd1a76f2964db773e49456cf542cf8fae0cbcdcea705c3.scope: Deactivated successfully.
Dec  1 05:24:00 np0005540825 podman[275567]: 2025-12-01 10:24:00.210704017 +0000 UTC m=+0.057867146 container create 2631071a1eec225c18114b00f9e3f2cfc36c51b4b95824050ab3bff4c29e8a0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lovelace, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:24:00 np0005540825 systemd[1]: Started libpod-conmon-2631071a1eec225c18114b00f9e3f2cfc36c51b4b95824050ab3bff4c29e8a0c.scope.
Dec  1 05:24:00 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:24:00 np0005540825 podman[275567]: 2025-12-01 10:24:00.184544384 +0000 UTC m=+0.031707573 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:24:00 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e844c878a8db04fe08cf72379922f9e6eb608a728845f0d6478d7f1137f78b12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:24:00 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e844c878a8db04fe08cf72379922f9e6eb608a728845f0d6478d7f1137f78b12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:24:00 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e844c878a8db04fe08cf72379922f9e6eb608a728845f0d6478d7f1137f78b12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:24:00 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e844c878a8db04fe08cf72379922f9e6eb608a728845f0d6478d7f1137f78b12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:24:00 np0005540825 podman[275567]: 2025-12-01 10:24:00.289131364 +0000 UTC m=+0.136294483 container init 2631071a1eec225c18114b00f9e3f2cfc36c51b4b95824050ab3bff4c29e8a0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lovelace, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 05:24:00 np0005540825 podman[275567]: 2025-12-01 10:24:00.299810641 +0000 UTC m=+0.146973740 container start 2631071a1eec225c18114b00f9e3f2cfc36c51b4b95824050ab3bff4c29e8a0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 05:24:00 np0005540825 podman[275567]: 2025-12-01 10:24:00.303336846 +0000 UTC m=+0.150500035 container attach 2631071a1eec225c18114b00f9e3f2cfc36c51b4b95824050ab3bff4c29e8a0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lovelace, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:24:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:00.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:00.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]: {
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:    "1": [
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:        {
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            "devices": [
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "/dev/loop3"
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            ],
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            "lv_name": "ceph_lv0",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            "lv_size": "21470642176",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            "name": "ceph_lv0",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            "tags": {
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.cluster_name": "ceph",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.crush_device_class": "",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.encrypted": "0",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.osd_id": "1",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.type": "block",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.vdo": "0",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:                "ceph.with_tpm": "0"
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            },
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            "type": "block",
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:            "vg_name": "ceph_vg0"
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:        }
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]:    ]
Dec  1 05:24:00 np0005540825 upbeat_lovelace[275584]: }
Dec  1 05:24:00 np0005540825 systemd[1]: libpod-2631071a1eec225c18114b00f9e3f2cfc36c51b4b95824050ab3bff4c29e8a0c.scope: Deactivated successfully.
Dec  1 05:24:00 np0005540825 podman[275567]: 2025-12-01 10:24:00.643560908 +0000 UTC m=+0.490724027 container died 2631071a1eec225c18114b00f9e3f2cfc36c51b4b95824050ab3bff4c29e8a0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:24:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 536 KiB/s rd, 2.4 MiB/s wr, 77 op/s
Dec  1 05:24:00 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e844c878a8db04fe08cf72379922f9e6eb608a728845f0d6478d7f1137f78b12-merged.mount: Deactivated successfully.
Dec  1 05:24:00 np0005540825 podman[275567]: 2025-12-01 10:24:00.852096812 +0000 UTC m=+0.699259901 container remove 2631071a1eec225c18114b00f9e3f2cfc36c51b4b95824050ab3bff4c29e8a0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lovelace, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:24:00 np0005540825 systemd[1]: libpod-conmon-2631071a1eec225c18114b00f9e3f2cfc36c51b4b95824050ab3bff4c29e8a0c.scope: Deactivated successfully.
Dec  1 05:24:01 np0005540825 nova_compute[256151]: 2025-12-01 10:24:01.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:24:01 np0005540825 nova_compute[256151]: 2025-12-01 10:24:01.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:24:01 np0005540825 nova_compute[256151]: 2025-12-01 10:24:01.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 05:24:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:01] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:24:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:01] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:24:01 np0005540825 podman[275697]: 2025-12-01 10:24:01.558286198 +0000 UTC m=+0.046769048 container create 60ea9d0aca4c2d7e3d52d6008988eca8a05fb82546c9af553e38f7d111201992 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 05:24:01 np0005540825 systemd[1]: Started libpod-conmon-60ea9d0aca4c2d7e3d52d6008988eca8a05fb82546c9af553e38f7d111201992.scope.
Dec  1 05:24:01 np0005540825 podman[275697]: 2025-12-01 10:24:01.538036023 +0000 UTC m=+0.026518893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:24:01 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:24:01 np0005540825 podman[275697]: 2025-12-01 10:24:01.654012829 +0000 UTC m=+0.142495689 container init 60ea9d0aca4c2d7e3d52d6008988eca8a05fb82546c9af553e38f7d111201992 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 05:24:01 np0005540825 podman[275697]: 2025-12-01 10:24:01.663031781 +0000 UTC m=+0.151514631 container start 60ea9d0aca4c2d7e3d52d6008988eca8a05fb82546c9af553e38f7d111201992 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 05:24:01 np0005540825 podman[275697]: 2025-12-01 10:24:01.666296039 +0000 UTC m=+0.154778899 container attach 60ea9d0aca4c2d7e3d52d6008988eca8a05fb82546c9af553e38f7d111201992 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 05:24:01 np0005540825 eloquent_wilson[275714]: 167 167
Dec  1 05:24:01 np0005540825 systemd[1]: libpod-60ea9d0aca4c2d7e3d52d6008988eca8a05fb82546c9af553e38f7d111201992.scope: Deactivated successfully.
Dec  1 05:24:01 np0005540825 podman[275697]: 2025-12-01 10:24:01.670590674 +0000 UTC m=+0.159073524 container died 60ea9d0aca4c2d7e3d52d6008988eca8a05fb82546c9af553e38f7d111201992 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  1 05:24:01 np0005540825 systemd[1]: var-lib-containers-storage-overlay-786ea01a2d3f6a4fd539f33df9e094a1693f93bfb93608630509b4240f913ad4-merged.mount: Deactivated successfully.
Dec  1 05:24:01 np0005540825 podman[275697]: 2025-12-01 10:24:01.708612306 +0000 UTC m=+0.197095146 container remove 60ea9d0aca4c2d7e3d52d6008988eca8a05fb82546c9af553e38f7d111201992 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wilson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 05:24:01 np0005540825 systemd[1]: libpod-conmon-60ea9d0aca4c2d7e3d52d6008988eca8a05fb82546c9af553e38f7d111201992.scope: Deactivated successfully.
Dec  1 05:24:01 np0005540825 nova_compute[256151]: 2025-12-01 10:24:01.763 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:01 np0005540825 nova_compute[256151]: 2025-12-01 10:24:01.840 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:01 np0005540825 podman[275742]: 2025-12-01 10:24:01.935459882 +0000 UTC m=+0.058816522 container create 08fed6801e48afde84890b7a383722529adb8eabc676c999a2d82a62f8051473 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:24:01 np0005540825 systemd[1]: Started libpod-conmon-08fed6801e48afde84890b7a383722529adb8eabc676c999a2d82a62f8051473.scope.
Dec  1 05:24:02 np0005540825 podman[275742]: 2025-12-01 10:24:01.908111527 +0000 UTC m=+0.031468217 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:24:02 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:24:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a2c36001f012e1d0448ab7ca618de01106f8d07d83317f50e267eebc3ae5b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:24:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a2c36001f012e1d0448ab7ca618de01106f8d07d83317f50e267eebc3ae5b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:24:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a2c36001f012e1d0448ab7ca618de01106f8d07d83317f50e267eebc3ae5b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:24:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a2c36001f012e1d0448ab7ca618de01106f8d07d83317f50e267eebc3ae5b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:24:02 np0005540825 podman[275742]: 2025-12-01 10:24:02.044117231 +0000 UTC m=+0.167473901 container init 08fed6801e48afde84890b7a383722529adb8eabc676c999a2d82a62f8051473 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.048 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.049 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.049 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.049 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.050 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:24:02 np0005540825 podman[275742]: 2025-12-01 10:24:02.053430142 +0000 UTC m=+0.176786782 container start 08fed6801e48afde84890b7a383722529adb8eabc676c999a2d82a62f8051473 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  1 05:24:02 np0005540825 podman[275742]: 2025-12-01 10:24:02.057613934 +0000 UTC m=+0.180970574 container attach 08fed6801e48afde84890b7a383722529adb8eabc676c999a2d82a62f8051473 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cerf, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:24:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:02.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:02.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:24:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297473408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.535 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.606 256155 DEBUG nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.606 256155 DEBUG nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  1 05:24:02 np0005540825 podman[275848]: 2025-12-01 10:24:02.639185491 +0000 UTC m=+0.062455859 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 05:24:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 377 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Dec  1 05:24:02 np0005540825 lvm[275875]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:24:02 np0005540825 lvm[275875]: VG ceph_vg0 finished
Dec  1 05:24:02 np0005540825 romantic_cerf[275758]: {}
Dec  1 05:24:02 np0005540825 systemd[1]: libpod-08fed6801e48afde84890b7a383722529adb8eabc676c999a2d82a62f8051473.scope: Deactivated successfully.
Dec  1 05:24:02 np0005540825 systemd[1]: libpod-08fed6801e48afde84890b7a383722529adb8eabc676c999a2d82a62f8051473.scope: Consumed 1.110s CPU time.
Dec  1 05:24:02 np0005540825 podman[275742]: 2025-12-01 10:24:02.745640732 +0000 UTC m=+0.868997382 container died 08fed6801e48afde84890b7a383722529adb8eabc676c999a2d82a62f8051473 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cerf, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:24:02 np0005540825 systemd[1]: var-lib-containers-storage-overlay-87a2c36001f012e1d0448ab7ca618de01106f8d07d83317f50e267eebc3ae5b1-merged.mount: Deactivated successfully.
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.791 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.792 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4323MB free_disk=59.9428825378418GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.792 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.792 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:24:02 np0005540825 podman[275742]: 2025-12-01 10:24:02.808630585 +0000 UTC m=+0.931987225 container remove 08fed6801e48afde84890b7a383722529adb8eabc676c999a2d82a62f8051473 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_cerf, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 05:24:02 np0005540825 systemd[1]: libpod-conmon-08fed6801e48afde84890b7a383722529adb8eabc676c999a2d82a62f8051473.scope: Deactivated successfully.
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.864 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Instance a0d2df94-256c-4d12-b661-60feb351cd23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.864 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.864 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 05:24:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:24:02 np0005540825 nova_compute[256151]: 2025-12-01 10:24:02.915 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:24:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:24:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:24:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:24:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:24:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583663575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:24:03 np0005540825 nova_compute[256151]: 2025-12-01 10:24:03.378 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:24:03 np0005540825 nova_compute[256151]: 2025-12-01 10:24:03.385 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:24:03 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:24:03 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:24:03 np0005540825 nova_compute[256151]: 2025-12-01 10:24:03.405 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 05:24:03 np0005540825 nova_compute[256151]: 2025-12-01 10:24:03.428 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 05:24:03 np0005540825 nova_compute[256151]: 2025-12-01 10:24:03.429 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:24:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:03.698Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:24:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:04 np0005540825 nova_compute[256151]: 2025-12-01 10:24:04.209 256155 INFO nova.compute.manager [None req-0a9c8066-f738-4a7e-b509-e763157ca389 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Get console output#033[00m
Dec  1 05:24:04 np0005540825 nova_compute[256151]: 2025-12-01 10:24:04.216 262942 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  1 05:24:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:04.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:04.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:04.581 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:24:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:04.582 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:24:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:04.583 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:24:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 377 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Dec  1 05:24:05 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:05Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:d8:bd 10.100.0.13
Dec  1 05:24:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:06 np0005540825 podman[275967]: 2025-12-01 10:24:06.411220839 +0000 UTC m=+0.110782498 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:24:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:06.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:06.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.4 MiB/s wr, 70 op/s
Dec  1 05:24:06 np0005540825 nova_compute[256151]: 2025-12-01 10:24:06.767 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:06 np0005540825 nova_compute[256151]: 2025-12-01 10:24:06.843 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  1 05:24:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3822714645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  1 05:24:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  1 05:24:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3822714645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  1 05:24:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:07.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:24:08 np0005540825 nova_compute[256151]: 2025-12-01 10:24:08.432 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:24:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:08.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:08.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec  1 05:24:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  1 05:24:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:24:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:24:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:24:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:24:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:24:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:24:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:24:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:24:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:24:10 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:10Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:d8:bd 10.100.0.13
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.105 256155 DEBUG nova.compute.manager [req-35f3724c-479f-4fd1-9696-dcfe35368ab3 req-0f5ae817-6f78-42ea-97b8-c6ac841d33a5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Received event network-changed-1ca40fc4-7826-4815-a0f0-7b7650b2569c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.106 256155 DEBUG nova.compute.manager [req-35f3724c-479f-4fd1-9696-dcfe35368ab3 req-0f5ae817-6f78-42ea-97b8-c6ac841d33a5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Refreshing instance network info cache due to event network-changed-1ca40fc4-7826-4815-a0f0-7b7650b2569c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.106 256155 DEBUG oslo_concurrency.lockutils [req-35f3724c-479f-4fd1-9696-dcfe35368ab3 req-0f5ae817-6f78-42ea-97b8-c6ac841d33a5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.106 256155 DEBUG oslo_concurrency.lockutils [req-35f3724c-479f-4fd1-9696-dcfe35368ab3 req-0f5ae817-6f78-42ea-97b8-c6ac841d33a5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.107 256155 DEBUG nova.network.neutron [req-35f3724c-479f-4fd1-9696-dcfe35368ab3 req-0f5ae817-6f78-42ea-97b8-c6ac841d33a5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Refreshing network info cache for port 1ca40fc4-7826-4815-a0f0-7b7650b2569c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 05:24:10 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:10.172 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:24:10 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:10.173 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.198 256155 DEBUG oslo_concurrency.lockutils [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "a0d2df94-256c-4d12-b661-60feb351cd23" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.199 256155 DEBUG oslo_concurrency.lockutils [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.200 256155 DEBUG oslo_concurrency.lockutils [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.200 256155 DEBUG oslo_concurrency.lockutils [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.201 256155 DEBUG oslo_concurrency.lockutils [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.203 256155 INFO nova.compute.manager [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Terminating instance#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.205 256155 DEBUG nova.compute.manager [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.206 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:10 np0005540825 kernel: tap1ca40fc4-78 (unregistering): left promiscuous mode
Dec  1 05:24:10 np0005540825 NetworkManager[48963]: <info>  [1764584650.2714] device (tap1ca40fc4-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 05:24:10 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:10Z|00075|binding|INFO|Releasing lport 1ca40fc4-7826-4815-a0f0-7b7650b2569c from this chassis (sb_readonly=1)
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.281 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:10 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:10Z|00076|binding|INFO|Removing iface tap1ca40fc4-78 ovn-installed in OVS
Dec  1 05:24:10 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:10Z|00077|if_status|INFO|Not setting lport 1ca40fc4-7826-4815-a0f0-7b7650b2569c down as sb is readonly
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.285 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:10 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:10Z|00078|binding|INFO|Setting lport 1ca40fc4-7826-4815-a0f0-7b7650b2569c down in Southbound
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.325 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:10 np0005540825 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec  1 05:24:10 np0005540825 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000a.scope: Consumed 14.168s CPU time.
Dec  1 05:24:10 np0005540825 systemd-machined[216307]: Machine qemu-5-instance-0000000a terminated.
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.433 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:10.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.442 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.452 256155 INFO nova.virt.libvirt.driver [-] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Instance destroyed successfully.#033[00m
Dec  1 05:24:10 np0005540825 nova_compute[256151]: 2025-12-01 10:24:10.453 256155 DEBUG nova.objects.instance [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'resources' on Instance uuid a0d2df94-256c-4d12-b661-60feb351cd23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:24:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:10.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:10 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:24:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.008 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:d8:bd 10.100.0.13'], port_security=['fa:16:3e:8e:d8:bd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a0d2df94-256c-4d12-b661-60feb351cd23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88d5f9d7-997a-4f2b-b635-2e7f48a3b027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3329977c-bebc-4580-be9d-02d5bf17e4f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b94d8cd4-086c-4f0e-aa55-7d70d05d5d6e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=1ca40fc4-7826-4815-a0f0-7b7650b2569c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.010 163291 INFO neutron.agent.ovn.metadata.agent [-] Port 1ca40fc4-7826-4815-a0f0-7b7650b2569c in datapath 88d5f9d7-997a-4f2b-b635-2e7f48a3b027 unbound from our chassis#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.012 163291 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88d5f9d7-997a-4f2b-b635-2e7f48a3b027, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.014 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[d69127b6-4942-4452-8479-b19325ed9dc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.014 163291 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027 namespace which is not needed anymore#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.022 256155 DEBUG nova.virt.libvirt.vif [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T10:23:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-517852973',display_name='tempest-TestNetworkBasicOps-server-517852973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-517852973',id=10,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE8f2tMgtdY6uK/TEM/G824tb8XiUTe0AYFCR1sI4EKgZMxehjpRJioEJcBzRvIncR3SkpZWtPTHJ5NBzvJ8NwGHDK3YfhuNmYFLbCp53kUD0BOfGUJC8kaomMCPqNo9EA==',key_name='tempest-TestNetworkBasicOps-2021277470',keypairs=<?>,launch_index=0,launched_at=2025-12-01T10:23:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-xxy6holk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T10:23:44Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=a0d2df94-256c-4d12-b661-60feb351cd23,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.023 256155 DEBUG nova.network.os_vif_util [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.024 256155 DEBUG nova.network.os_vif_util [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:d8:bd,bridge_name='br-int',has_traffic_filtering=True,id=1ca40fc4-7826-4815-a0f0-7b7650b2569c,network=Network(88d5f9d7-997a-4f2b-b635-2e7f48a3b027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ca40fc4-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.025 256155 DEBUG os_vif [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:d8:bd,bridge_name='br-int',has_traffic_filtering=True,id=1ca40fc4-7826-4815-a0f0-7b7650b2569c,network=Network(88d5f9d7-997a-4f2b-b635-2e7f48a3b027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ca40fc4-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.027 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.028 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca40fc4-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.030 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.031 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.035 256155 INFO os_vif [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:d8:bd,bridge_name='br-int',has_traffic_filtering=True,id=1ca40fc4-7826-4815-a0f0-7b7650b2569c,network=Network(88d5f9d7-997a-4f2b-b635-2e7f48a3b027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ca40fc4-78')#033[00m
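The DelPortCommand transaction logged at 10:24:11.028 is ovsdbapp's IDL-level equivalent of an `ovs-vsctl --if-exists del-port` call. A sketch of the same cleanup via the CLI, with the bridge and port names copied from the log:

    import subprocess

    # Same effect as the DelPortCommand(if_exists=True) transaction above:
    # drop the tap port from br-int, silently succeeding if already gone.
    subprocess.check_call(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap1ca40fc4-78"])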
Dec  1 05:24:11 np0005540825 neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027[275091]: [NOTICE]   (275095) : haproxy version is 2.8.14-c23fe91
Dec  1 05:24:11 np0005540825 neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027[275091]: [NOTICE]   (275095) : path to executable is /usr/sbin/haproxy
Dec  1 05:24:11 np0005540825 neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027[275091]: [WARNING]  (275095) : Exiting Master process...
Dec  1 05:24:11 np0005540825 neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027[275091]: [ALERT]    (275095) : Current worker (275097) exited with code 143 (Terminated)
Dec  1 05:24:11 np0005540825 neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027[275091]: [WARNING]  (275095) : All workers exited. Exiting... (0)
Dec  1 05:24:11 np0005540825 systemd[1]: libpod-67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b.scope: Deactivated successfully.
Dec  1 05:24:11 np0005540825 podman[276046]: 2025-12-01 10:24:11.234083812 +0000 UTC m=+0.067557226 container died 67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:24:11 np0005540825 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b-userdata-shm.mount: Deactivated successfully.
Dec  1 05:24:11 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d4ae7b2f31508576a7e44ea65af091fe8735070f1fb2e3a9e220ac63a92f8c53-merged.mount: Deactivated successfully.
Dec  1 05:24:11 np0005540825 podman[276046]: 2025-12-01 10:24:11.287379334 +0000 UTC m=+0.120852718 container cleanup 67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 05:24:11 np0005540825 systemd[1]: libpod-conmon-67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b.scope: Deactivated successfully.
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.370 256155 DEBUG nova.compute.manager [req-7cec883c-ab25-4f06-afa4-3755a439398b req-e1252c18-fbab-4671-836e-e168a5443c09 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Received event network-vif-unplugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:24:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:11] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:24:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:11] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.372 256155 DEBUG oslo_concurrency.lockutils [req-7cec883c-ab25-4f06-afa4-3755a439398b req-e1252c18-fbab-4671-836e-e168a5443c09 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.373 256155 DEBUG oslo_concurrency.lockutils [req-7cec883c-ab25-4f06-afa4-3755a439398b req-e1252c18-fbab-4671-836e-e168a5443c09 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.373 256155 DEBUG oslo_concurrency.lockutils [req-7cec883c-ab25-4f06-afa4-3755a439398b req-e1252c18-fbab-4671-836e-e168a5443c09 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.373 256155 DEBUG nova.compute.manager [req-7cec883c-ab25-4f06-afa4-3755a439398b req-e1252c18-fbab-4671-836e-e168a5443c09 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] No waiting events found dispatching network-vif-unplugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.374 256155 DEBUG nova.compute.manager [req-7cec883c-ab25-4f06-afa4-3755a439398b req-e1252c18-fbab-4671-836e-e168a5443c09 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Received event network-vif-unplugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 05:24:11 np0005540825 podman[276075]: 2025-12-01 10:24:11.377476576 +0000 UTC m=+0.059346896 container remove 67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.384 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[171446db-02f0-47ae-b13b-5880e674ffa4]: (4, ('Mon Dec  1 10:24:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027 (67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b)\n67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b\nMon Dec  1 10:24:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027 (67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b)\n67de77b7598a576ed26027c3a01bc4edc696b0d73d8081f8e72634e1ff75ad4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.386 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[24e0c549-20cd-4892-bde1-818f90cc950e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.387 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88d5f9d7-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.445 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:11 np0005540825 kernel: tap88d5f9d7-90: left promiscuous mode
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.465 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.467 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[7ace0b41-8883-4b15-bd08-ae924e889f65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.483 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[c4994a88-424e-41db-b845-a8b0a38eb605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.484 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5a1982-40f6-4905-b355-6e3dc6a4ccdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.511 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[79836fbb-8ba4-4934-872b-a531e951c338]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449215, 'reachable_time': 38657, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276091, 'error': None, 'target': 'ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.514 163408 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 05:24:11 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:11.515 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[e06806fb-50d0-439b-a84b-54f70fe8980e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:11 np0005540825 systemd[1]: run-netns-ovnmeta\x2d88d5f9d7\x2d997a\x2d4f2b\x2db635\x2d2e7f48a3b027.mount: Deactivated successfully.
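The metadata namespace teardown ends with the remove_netns call at ip_lib.py:607 and systemd releasing the run-netns bind mount. Neutron's privsep helper wraps pyroute2 for this step; a minimal sketch under that assumption (namespace name copied from the log, must run as root):

    from pyroute2 import netns

    # Equivalent of the remove_netns call logged above; needs CAP_SYS_ADMIN.
    netns.remove('ovnmeta-88d5f9d7-997a-4f2b-b635-2e7f48a3b027')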
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.590 256155 DEBUG nova.network.neutron [req-35f3724c-479f-4fd1-9696-dcfe35368ab3 req-0f5ae817-6f78-42ea-97b8-c6ac841d33a5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Updated VIF entry in instance network info cache for port 1ca40fc4-7826-4815-a0f0-7b7650b2569c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.590 256155 DEBUG nova.network.neutron [req-35f3724c-479f-4fd1-9696-dcfe35368ab3 req-0f5ae817-6f78-42ea-97b8-c6ac841d33a5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Updating instance_info_cache with network_info: [{"id": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "address": "fa:16:3e:8e:d8:bd", "network": {"id": "88d5f9d7-997a-4f2b-b635-2e7f48a3b027", "bridge": "br-int", "label": "tempest-network-smoke--79469787", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ca40fc4-78", "ovs_interfaceid": "1ca40fc4-7826-4815-a0f0-7b7650b2569c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.612 256155 DEBUG oslo_concurrency.lockutils [req-35f3724c-479f-4fd1-9696-dcfe35368ab3 req-0f5ae817-6f78-42ea-97b8-c6ac841d33a5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-a0d2df94-256c-4d12-b661-60feb351cd23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:24:11 np0005540825 nova_compute[256151]: 2025-12-01 10:24:11.848 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:12 np0005540825 nova_compute[256151]: 2025-12-01 10:24:12.250 256155 INFO nova.virt.libvirt.driver [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Deleting instance files /var/lib/nova/instances/a0d2df94-256c-4d12-b661-60feb351cd23_del#033[00m
Dec  1 05:24:12 np0005540825 nova_compute[256151]: 2025-12-01 10:24:12.251 256155 INFO nova.virt.libvirt.driver [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Deletion of /var/lib/nova/instances/a0d2df94-256c-4d12-b661-60feb351cd23_del complete#033[00m
Dec  1 05:24:12 np0005540825 nova_compute[256151]: 2025-12-01 10:24:12.315 256155 INFO nova.compute.manager [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Took 2.11 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 05:24:12 np0005540825 nova_compute[256151]: 2025-12-01 10:24:12.316 256155 DEBUG oslo.service.loopingcall [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 05:24:12 np0005540825 nova_compute[256151]: 2025-12-01 10:24:12.317 256155 DEBUG nova.compute.manager [-] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 05:24:12 np0005540825 nova_compute[256151]: 2025-12-01 10:24:12.317 256155 DEBUG nova.network.neutron [-] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 05:24:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:24:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:12.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:24:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:12.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.328 256155 DEBUG nova.network.neutron [-] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.346 256155 INFO nova.compute.manager [-] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Took 1.03 seconds to deallocate network for instance.#033[00m
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.670 256155 DEBUG nova.compute.manager [req-3faa5613-d6b4-45f3-94ef-1e188ed5f3ca req-7864eea1-4945-49d5-9fb5-41c6b2e5dcba dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Received event network-vif-deleted-1ca40fc4-7826-4815-a0f0-7b7650b2569c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.672 256155 DEBUG nova.compute.manager [req-ccc7e0e9-af54-45cf-818b-b2d2a95e33b7 req-300588b1-a11a-4276-8030-f53e60958181 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Received event network-vif-plugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.673 256155 DEBUG oslo_concurrency.lockutils [req-ccc7e0e9-af54-45cf-818b-b2d2a95e33b7 req-300588b1-a11a-4276-8030-f53e60958181 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.674 256155 DEBUG oslo_concurrency.lockutils [req-ccc7e0e9-af54-45cf-818b-b2d2a95e33b7 req-300588b1-a11a-4276-8030-f53e60958181 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.674 256155 DEBUG oslo_concurrency.lockutils [req-ccc7e0e9-af54-45cf-818b-b2d2a95e33b7 req-300588b1-a11a-4276-8030-f53e60958181 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.675 256155 DEBUG nova.compute.manager [req-ccc7e0e9-af54-45cf-818b-b2d2a95e33b7 req-300588b1-a11a-4276-8030-f53e60958181 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] No waiting events found dispatching network-vif-plugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.675 256155 WARNING nova.compute.manager [req-ccc7e0e9-af54-45cf-818b-b2d2a95e33b7 req-300588b1-a11a-4276-8030-f53e60958181 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Received unexpected event network-vif-plugged-1ca40fc4-7826-4815-a0f0-7b7650b2569c for instance with vm_state active and task_state deleting.#033[00m
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.691 256155 DEBUG oslo_concurrency.lockutils [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.692 256155 DEBUG oslo_concurrency.lockutils [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:24:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:13.700Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:24:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:13.700Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:24:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:13.700Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
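The alertmanager failures above are plain connectivity errors: TCP dials to the other two nodes' Ceph dashboard Prometheus receivers on port 8443 time out. A quick stdlib reachability check against the same endpoints (hosts and port copied from the log):

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            # Mirrors the dialer's failure mode: a timed-out connect().
            socket.create_connection((host, 8443), timeout=3).close()
            print(host, "reachable")
        except OSError as exc:
            print(host, "unreachable:", exc)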
Dec  1 05:24:13 np0005540825 nova_compute[256151]: 2025-12-01 10:24:13.751 256155 DEBUG oslo_concurrency.processutils [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:24:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:24:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2923719981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:24:14 np0005540825 nova_compute[256151]: 2025-12-01 10:24:14.249 256155 DEBUG oslo_concurrency.processutils [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
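nova-compute's pool capacity numbers come from the exact subprocess shown above. A sketch that runs the same command and reads the cluster totals from its JSON; the command line is verbatim from the log, while the 'stats' key names assume the usual `ceph df --format=json` layout and are worth verifying against your Ceph release:

    import json
    import subprocess

    # Same probe nova-compute issues via oslo_concurrency.processutils.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])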
Dec  1 05:24:14 np0005540825 nova_compute[256151]: 2025-12-01 10:24:14.256 256155 DEBUG nova.compute.provider_tree [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:24:14 np0005540825 nova_compute[256151]: 2025-12-01 10:24:14.277 256155 DEBUG nova.scheduler.client.report [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 05:24:14 np0005540825 nova_compute[256151]: 2025-12-01 10:24:14.301 256155 DEBUG oslo_concurrency.lockutils [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:24:14 np0005540825 nova_compute[256151]: 2025-12-01 10:24:14.439 256155 INFO nova.scheduler.client.report [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Deleted allocations for instance a0d2df94-256c-4d12-b661-60feb351cd23
Dec  1 05:24:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:14.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:14.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
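The paired anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 recur every two seconds for the rest of this section, which is the signature of load-balancer health probes rather than user traffic. A probe of that shape looks like the sketch below; the port is an assumption, since the beast log lines do not record it:

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)  # port assumed
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # radosgw answers 200 with an empty body
    conn.close()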
Dec  1 05:24:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec  1 05:24:14 np0005540825 nova_compute[256151]: 2025-12-01 10:24:14.668 256155 DEBUG oslo_concurrency.lockutils [None req-ecdb000d-932b-4723-a0ca-68fc8c24db0f 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "a0d2df94-256c-4d12-b661-60feb351cd23" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
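The Lock "..." acquired/released pairs throughout this section come from oslo.concurrency's named in-process locks; here the instance UUID is the lock name and "held 4.469s" is the whole terminate path running under it. A minimal sketch of the same pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('a0d2df94-256c-4d12-b661-60feb351cd23')
    def do_terminate_instance():
        # Everything here runs while the named lock is held; with debug
        # logging enabled, oslo emits acquire/release lines like the ones above.
        pass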
Dec  1 05:24:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:16 np0005540825 nova_compute[256151]: 2025-12-01 10:24:16.032 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:16 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:16.175 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 05:24:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:16.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:16.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 17 KiB/s wr, 29 op/s
Dec  1 05:24:16 np0005540825 nova_compute[256151]: 2025-12-01 10:24:16.849 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:17.256Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
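Alertmanager keeps failing to POST to the ceph-dashboard receiver on compute-1 and compute-2; the identical error repeats throughout this section, so those endpoints are down or unreachable rather than briefly busy. A minimal stand-in for the receiving side, only to show the payload shape alertmanager sends; this handler is hypothetical and is not the dashboard's implementation:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length))
            # Alertmanager's webhook format carries the alerts under "alerts".
            print("got", len(payload.get("alerts", [])), "alert(s)")
            self.send_response(200)
            self.end_headers()

    HTTPServer(("", 8443), Receiver).serve_forever()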
Dec  1 05:24:18 np0005540825 podman[276124]: 2025-12-01 10:24:18.241998358 +0000 UTC m=+0.105792984 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  1 05:24:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:18.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:18.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Dec  1 05:24:18 np0005540825 nova_compute[256151]: 2025-12-01 10:24:18.725 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:18 np0005540825 nova_compute[256151]: 2025-12-01 10:24:18.843 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:20.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:20.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Dec  1 05:24:21 np0005540825 nova_compute[256151]: 2025-12-01 10:24:21.036 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:21] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:24:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:21] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:24:21 np0005540825 nova_compute[256151]: 2025-12-01 10:24:21.851 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:22.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:22.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  1 05:24:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:23.701Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:24:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:24.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:24:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
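The audit lines show how every status poll reaches the monitor as a JSON mon command ("osd blocklist ls" here, issued by the mgr). The same dispatch is available from the python-rados binding; a sketch assuming the client.openstack credentials seen earlier in the section:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    # mon_command takes the same JSON the audit log records, plus an input buffer.
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b"")
    print(ret, out)
    cluster.shutdown()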
Dec  1 05:24:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  1 05:24:25 np0005540825 nova_compute[256151]: 2025-12-01 10:24:25.450 256155 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764584650.4494643, a0d2df94-256c-4d12-b661-60feb351cd23 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 05:24:25 np0005540825 nova_compute[256151]: 2025-12-01 10:24:25.451 256155 INFO nova.compute.manager [-] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] VM Stopped (Lifecycle Event)
Dec  1 05:24:25 np0005540825 nova_compute[256151]: 2025-12-01 10:24:25.474 256155 DEBUG nova.compute.manager [None req-3ee66684-5ec7-4059-834e-b50cda760d40 - - - - - -] [instance: a0d2df94-256c-4d12-b661-60feb351cd23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
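The "Checking state" step asks libvirt for the domain's power state after the Stopped lifecycle event. Stripped of nova's plumbing, that check is roughly the sketch below; note the lookup will raise once the deleted domain is finally undefined:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("a0d2df94-256c-4d12-b661-60feb351cd23")
    state, reason = dom.state()
    print(state == libvirt.VIR_DOMAIN_SHUTOFF)  # True once the VM has stopped
    conn.close()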
Dec  1 05:24:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:26 np0005540825 nova_compute[256151]: 2025-12-01 10:24:26.040 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:26.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:24:26 np0005540825 nova_compute[256151]: 2025-12-01 10:24:26.855 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:27.257Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:24:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:28.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:24:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:30.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:30.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:24:31 np0005540825 nova_compute[256151]: 2025-12-01 10:24:31.042 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:31] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:24:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:31] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:24:31 np0005540825 nova_compute[256151]: 2025-12-01 10:24:31.858 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:32.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:32.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:24:33 np0005540825 podman[276193]: 2025-12-01 10:24:33.219614087 +0000 UTC m=+0.077857333 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 05:24:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:33.703Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:24:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:34.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:34.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:24:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.045 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.170 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.170 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.194 256155 DEBUG nova.compute.manager [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.268 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.268 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.279 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.279 256155 INFO nova.compute.claims [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Claim successful on node compute-0.ctlplane.example.com
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.361 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:24:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:36.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:36.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:24:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:24:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3348997553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.860 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.876 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.883 256155 DEBUG nova.compute.provider_tree [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.899 256155 DEBUG nova.scheduler.client.report [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.929 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.931 256155 DEBUG nova.compute.manager [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.992 256155 DEBUG nova.compute.manager [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  1 05:24:36 np0005540825 nova_compute[256151]: 2025-12-01 10:24:36.993 256155 DEBUG nova.network.neutron [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.049 256155 INFO nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.073 256155 DEBUG nova.compute.manager [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.196 256155 DEBUG nova.compute.manager [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.197 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.198 256155 INFO nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Creating image(s)
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.224 256155 DEBUG nova.storage.rbd_utils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image dd56af67-ae91-4891-b152-ac9a0f325fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:24:37 np0005540825 podman[276239]: 2025-12-01 10:24:37.23121373 +0000 UTC m=+0.091185461 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.254 256155 DEBUG nova.storage.rbd_utils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image dd56af67-ae91-4891-b152-ac9a0f325fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:24:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:37.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.286 256155 DEBUG nova.storage.rbd_utils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image dd56af67-ae91-4891-b152-ac9a0f325fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.290 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.359 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
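The qemu-img info call is deliberately wrapped in oslo's prlimit helper, capping the child at 1 GiB of address space and 30 CPU seconds so a crafted image in the cache cannot exhaust the host. A sketch of the same caps without the wrapper module, using resource limits in a preexec hook:

    import resource
    import subprocess

    def limits():
        # Mirrors --as=1073741824 and --cpu=30 from the prlimit invocation above.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    info = subprocess.check_output(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34"],
        preexec_fn=limits, env={"LC_ALL": "C", "LANG": "C"})
    print(info.decode())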
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.360 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.360 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.361 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "caad95fa2cc8ed03bed2e9851744954b07ec7b34" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.391 256155 DEBUG nova.storage.rbd_utils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image dd56af67-ae91-4891-b152-ac9a0f325fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.393 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 dd56af67-ae91-4891-b152-ac9a0f325fc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.725 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34 dd56af67-ae91-4891-b152-ac9a0f325fc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.815 256155 DEBUG nova.storage.rbd_utils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] resizing rbd image dd56af67-ae91-4891-b152-ac9a0f325fc5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
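The boot-from-image path above is two steps: an rbd import of the cached base file into the vms pool, then a resize to the flavor's 1 GiB root disk. A sketch of the pair, using the rbd CLI for the import and the python-rbd binding for the resize, with the same names and credentials the log records:

    import subprocess
    import rados
    import rbd

    BASE = "/var/lib/nova/instances/_base/caad95fa2cc8ed03bed2e9851744954b07ec7b34"
    DISK = "dd56af67-ae91-4891-b152-ac9a0f325fc5_disk"

    # Step 1: import the flat base image as an RBD image (format 2).
    subprocess.check_call(["rbd", "import", "--pool", "vms", BASE, DISK,
                           "--image-format=2", "--id", "openstack",
                           "--conf", "/etc/ceph/ceph.conf"])

    # Step 2: grow it to the flavor's root disk size, as logged above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    with rbd.Image(ioctx, DISK) as image:
        image.resize(1073741824)
    ioctx.close()
    cluster.shutdown()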
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.899 256155 DEBUG nova.policy [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b56a238daf0445798410e51caada0ff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f6be4e572624210b91193c011607c08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.951 256155 DEBUG nova.objects.instance [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'migration_context' on Instance uuid dd56af67-ae91-4891-b152-ac9a0f325fc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.968 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.969 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Ensure instance console log exists: /var/lib/nova/instances/dd56af67-ae91-4891-b152-ac9a0f325fc5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.970 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.970 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:24:37 np0005540825 nova_compute[256151]: 2025-12-01 10:24:37.970 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:24:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:38.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:38.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:38 np0005540825 nova_compute[256151]: 2025-12-01 10:24:38.629 256155 DEBUG nova.network.neutron [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Successfully created port: 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
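From a client's point of view, the port nova just created for the instance amounts to one Neutron API call. A sketch using openstacksdk; the cloud name and network UUID are placeholders, since neither appears in this log:

    import openstack

    conn = openstack.connect(cloud="envvars")      # cloud selection is an assumption
    port = conn.network.create_port(
        network_id="<tenant-network-uuid>",        # not recorded in this log
        device_id="dd56af67-ae91-4891-b152-ac9a0f325fc5",
        device_owner="compute:nova")
    print(port.id)  # e.g. 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 above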
Dec  1 05:24:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:24:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:24:39
Dec  1 05:24:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:24:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:24:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', '.rgw.root', 'vms', 'images', '.mgr', 'default.rgw.log']
Dec  1 05:24:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:24:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:24:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:24:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:24:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:24:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:24:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:24:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:24:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:24:40 np0005540825 nova_compute[256151]: 2025-12-01 10:24:40.002 256155 DEBUG nova.network.neutron [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Successfully updated port: 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  1 05:24:40 np0005540825 nova_compute[256151]: 2025-12-01 10:24:40.017 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:24:40 np0005540825 nova_compute[256151]: 2025-12-01 10:24:40.017 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquired lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:24:40 np0005540825 nova_compute[256151]: 2025-12-01 10:24:40.018 256155 DEBUG nova.network.neutron [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  1 05:24:40 np0005540825 nova_compute[256151]: 2025-12-01 10:24:40.082 256155 DEBUG nova.compute.manager [req-28bc42d2-b6f6-4c2c-b360-de258004aa25 req-9b29f877-ece2-4850-aaa5-e9c18aac3b58 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-changed-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:24:40 np0005540825 nova_compute[256151]: 2025-12-01 10:24:40.083 256155 DEBUG nova.compute.manager [req-28bc42d2-b6f6-4c2c-b360-de258004aa25 req-9b29f877-ece2-4850-aaa5-e9c18aac3b58 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Refreshing instance network info cache due to event network-changed-80410344-d9b7-4cc9-a8bc-ee566d46d0e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 05:24:40 np0005540825 nova_compute[256151]: 2025-12-01 10:24:40.084 256155 DEBUG oslo_concurrency.lockutils [req-28bc42d2-b6f6-4c2c-b360-de258004aa25 req-9b29f877-ece2-4850-aaa5-e9c18aac3b58 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
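
The pg_autoscaler entries above all follow one formula: a pool's raw capacity ratio is multiplied by its bias and by the cluster-wide PG budget (mon_target_pg_per_osd x OSD count), and the product is what the log prints as "pg target" before quantization. A minimal Python sketch, assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs implied by the 60 GiB cluster (a budget of 300 PGs), reproduces the logged values:

    # Reproduce the "pg target" numbers the pg_autoscaler logs above.
    # Assumption: mon_target_pg_per_osd=100 (the default) and 3 OSDs,
    # giving a cluster PG budget of 300.
    PG_BUDGET = 100 * 3

    def pg_target(usage_ratio, bias):
        """Raw, unquantized PG target as printed in the log."""
        return usage_ratio * bias * PG_BUDGET

    print(pg_target(7.185749983720779e-06, 1.0))  # '.mgr'              -> ~0.0021557250
    print(pg_target(0.000665858301588852, 1.0))   # 'images'            -> ~0.1997574905
    print(pg_target(5.087256625643029e-07, 4.0))  # 'cephfs.cephfs.meta' -> ~0.0006104708

The "quantized to" figure then snaps that target to a power of two and applies per-pool floors, which is why pools with near-zero usage still sit at 32 PGs while '.mgr' is allowed to shrink to 1.
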
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:24:40 np0005540825 nova_compute[256151]: 2025-12-01 10:24:40.137 256155 DEBUG nova.network.neutron [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 05:24:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:40.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:40.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.049 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.112 256155 DEBUG nova.network.neutron [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updating instance_info_cache with network_info: [{"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.139 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Releasing lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.140 256155 DEBUG nova.compute.manager [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Instance network_info: |[{"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.140 256155 DEBUG oslo_concurrency.lockutils [req-28bc42d2-b6f6-4c2c-b360-de258004aa25 req-9b29f877-ece2-4850-aaa5-e9c18aac3b58 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.141 256155 DEBUG nova.network.neutron [req-28bc42d2-b6f6-4c2c-b360-de258004aa25 req-9b29f877-ece2-4850-aaa5-e9c18aac3b58 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Refreshing network info cache for port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
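
The network_info payload Nova logs when updating the instance cache (the |[...]| blob above) is plain JSON once the log framing is stripped, so it can be sanity-checked offline. A small sketch over a trimmed copy of that blob (only the fields read below are kept; the values are exactly as logged):

    import json

    # Trimmed copy of the network_info blob from the cache-update line above.
    raw = '''[{"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4",
               "address": "fa:16:3e:bd:ef:f0",
               "devname": "tap80410344-d9",
               "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab",
                           "bridge": "br-int",
                           "subnets": [{"cidr": "10.100.0.0/28",
                                        "ips": [{"address": "10.100.0.11",
                                                 "type": "fixed",
                                                 "version": 4}],
                                        "version": 4}],
                           "meta": {"mtu": 1442}}}]'''

    for vif in json.loads(raw):
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["devname"], vif["address"], ips, vif["network"]["meta"]["mtu"])
        # -> tap80410344-d9 fa:16:3e:bd:ef:f0 ['10.100.0.11'] 1442
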
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.146 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Start _get_guest_xml network_info=[{"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '8f75d6de-6ce0-44e1-b417-d0111424475b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.156 256155 WARNING nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.168 256155 DEBUG nova.virt.libvirt.host [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.169 256155 DEBUG nova.virt.libvirt.host [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.173 256155 DEBUG nova.virt.libvirt.host [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.174 256155 DEBUG nova.virt.libvirt.host [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.175 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.175 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T10:14:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e731827-1896-49cd-b0cc-12903555d217',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T10:14:19Z,direct_url=<?>,disk_format='qcow2',id=8f75d6de-6ce0-44e1-b417-d0111424475b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9a5734898a6345909986f17ddf57b27d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T10:14:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.176 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.176 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.177 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.177 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.178 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.178 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.178 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.179 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.179 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.180 256155 DEBUG nova.virt.hardware [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.184 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:24:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:41] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:24:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:41] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:24:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:24:41 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4221105705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.721 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.768 256155 DEBUG nova.storage.rbd_utils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image dd56af67-ae91-4891-b152-ac9a0f325fc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.774 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:24:41 np0005540825 nova_compute[256151]: 2025-12-01 10:24:41.911 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:42 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  1 05:24:42 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1188300432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.285 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
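
The paired mon_command/audit lines and the two ~0.5 s `ceph mon dump` round-trips above are the RBD driver discovering monitor addresses before it renders the disk XML. The same call can be made standalone; a sketch assuming only that the JSON carries a top-level "mons" list whose entries expose "name" and "addr" (exact field names vary by Ceph release):

    import json
    import subprocess

    # The exact command nova_compute runs in the log above.
    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    for mon in json.loads(out).get("mons", []):
        print(mon.get("name"), mon.get("addr"))
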
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.286 256155 DEBUG nova.virt.libvirt.vif [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:24:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1744736494',display_name='tempest-TestNetworkBasicOps-server-1744736494',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1744736494',id=11,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFEtm0tPdDT/qfCstlsxaIuU7F73TYcccr1SL0AFFhbSP6QyY3W7FSBEr169NqnltBPMCF/mGTi3JWFSUnlZAo+KOT76m6a5IiHBdDTIPsf63wASE4wAGvguH8uhatHBgg==',key_name='tempest-TestNetworkBasicOps-1704588061',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-b57vd3uf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:24:37Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=dd56af67-ae91-4891-b152-ac9a0f325fc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.287 256155 DEBUG nova.network.os_vif_util [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.288 256155 DEBUG nova.network.os_vif_util [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ef:f0,bridge_name='br-int',has_traffic_filtering=True,id=80410344-d9b7-4cc9-a8bc-ee566d46d0e4,network=Network(82ec8f83-684f-44ae-8389-122bf8ed45ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80410344-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.289 256155 DEBUG nova.objects.instance [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd56af67-ae91-4891-b152-ac9a0f325fc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.309 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] End _get_guest_xml xml=<domain type="kvm">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  <uuid>dd56af67-ae91-4891-b152-ac9a0f325fc5</uuid>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  <name>instance-0000000b</name>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  <memory>131072</memory>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  <vcpu>1</vcpu>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  <metadata>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <nova:name>tempest-TestNetworkBasicOps-server-1744736494</nova:name>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <nova:creationTime>2025-12-01 10:24:41</nova:creationTime>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <nova:flavor name="m1.nano">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <nova:memory>128</nova:memory>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <nova:disk>1</nova:disk>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <nova:swap>0</nova:swap>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <nova:vcpus>1</nova:vcpus>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      </nova:flavor>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <nova:owner>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <nova:user uuid="5b56a238daf0445798410e51caada0ff">tempest-TestNetworkBasicOps-1248115384-project-member</nova:user>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <nova:project uuid="9f6be4e572624210b91193c011607c08">tempest-TestNetworkBasicOps-1248115384</nova:project>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      </nova:owner>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <nova:root type="image" uuid="8f75d6de-6ce0-44e1-b417-d0111424475b"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <nova:ports>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <nova:port uuid="80410344-d9b7-4cc9-a8bc-ee566d46d0e4">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        </nova:port>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      </nova:ports>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    </nova:instance>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  </metadata>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  <sysinfo type="smbios">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <system>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <entry name="manufacturer">RDO</entry>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <entry name="product">OpenStack Compute</entry>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <entry name="serial">dd56af67-ae91-4891-b152-ac9a0f325fc5</entry>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <entry name="uuid">dd56af67-ae91-4891-b152-ac9a0f325fc5</entry>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <entry name="family">Virtual Machine</entry>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    </system>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  </sysinfo>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  <os>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <boot dev="hd"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <smbios mode="sysinfo"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  </os>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  <features>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <acpi/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <apic/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <vmcoreinfo/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  </features>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  <clock offset="utc">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <timer name="hpet" present="no"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  </clock>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  <cpu mode="host-model" match="exact">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  </cpu>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  <devices>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <disk type="network" device="disk">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/dd56af67-ae91-4891-b152-ac9a0f325fc5_disk">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <target dev="vda" bus="virtio"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <disk type="network" device="cdrom">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <driver type="raw" cache="none"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <source protocol="rbd" name="vms/dd56af67-ae91-4891-b152-ac9a0f325fc5_disk.config">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <host name="192.168.122.100" port="6789"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <host name="192.168.122.102" port="6789"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <host name="192.168.122.101" port="6789"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      </source>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <auth username="openstack">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:        <secret type="ceph" uuid="365f19c2-81e5-5edd-b6b4-280555214d3a"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      </auth>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <target dev="sda" bus="sata"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    </disk>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <interface type="ethernet">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <mac address="fa:16:3e:bd:ef:f0"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <mtu size="1442"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <target dev="tap80410344-d9"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    </interface>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <serial type="pty">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <log file="/var/lib/nova/instances/dd56af67-ae91-4891-b152-ac9a0f325fc5/console.log" append="off"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    </serial>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <video>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <model type="virtio"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    </video>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <input type="tablet" bus="usb"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <rng model="virtio">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <backend model="random">/dev/urandom</backend>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    </rng>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <controller type="usb" index="0"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    <memballoon model="virtio">
Dec  1 05:24:42 np0005540825 nova_compute[256151]:      <stats period="10"/>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:    </memballoon>
Dec  1 05:24:42 np0005540825 nova_compute[256151]:  </devices>
Dec  1 05:24:42 np0005540825 nova_compute[256151]: </domain>
Dec  1 05:24:42 np0005540825 nova_compute[256151]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
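
The <domain> XML dumped above is complete, so it can be inspected with any standard XML parser; for example, listing each RBD-backed device and the monitor endpoints baked into it (the file name below is hypothetical; the content is the block above):

    import xml.etree.ElementTree as ET

    # Save the <domain>...</domain> block from the log above as domain.xml.
    root = ET.parse("domain.xml").getroot()

    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            hosts = ["{}:{}".format(h.get("name"), h.get("port"))
                     for h in src.findall("host")]
            print(disk.get("device"), src.get("name"), hosts)
    # Both devices print all three mons, matching the earlier `ceph mon dump`:
    # disk  vms/dd56af67-ae91-4891-b152-ac9a0f325fc5_disk        ['192.168.122.100:6789', '192.168.122.102:6789', '192.168.122.101:6789']
    # cdrom vms/dd56af67-ae91-4891-b152-ac9a0f325fc5_disk.config ['192.168.122.100:6789', '192.168.122.102:6789', '192.168.122.101:6789']
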
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.310 256155 DEBUG nova.compute.manager [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Preparing to wait for external event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.311 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.311 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.311 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.312 256155 DEBUG nova.virt.libvirt.vif [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T10:24:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1744736494',display_name='tempest-TestNetworkBasicOps-server-1744736494',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1744736494',id=11,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFEtm0tPdDT/qfCstlsxaIuU7F73TYcccr1SL0AFFhbSP6QyY3W7FSBEr169NqnltBPMCF/mGTi3JWFSUnlZAo+KOT76m6a5IiHBdDTIPsf63wASE4wAGvguH8uhatHBgg==',key_name='tempest-TestNetworkBasicOps-1704588061',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-b57vd3uf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T10:24:37Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=dd56af67-ae91-4891-b152-ac9a0f325fc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.313 256155 DEBUG nova.network.os_vif_util [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.313 256155 DEBUG nova.network.os_vif_util [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ef:f0,bridge_name='br-int',has_traffic_filtering=True,id=80410344-d9b7-4cc9-a8bc-ee566d46d0e4,network=Network(82ec8f83-684f-44ae-8389-122bf8ed45ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80410344-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.314 256155 DEBUG os_vif [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ef:f0,bridge_name='br-int',has_traffic_filtering=True,id=80410344-d9b7-4cc9-a8bc-ee566d46d0e4,network=Network(82ec8f83-684f-44ae-8389-122bf8ed45ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80410344-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.314 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.315 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.315 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.318 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.318 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80410344-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.319 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap80410344-d9, col_values=(('external_ids', {'iface-id': '80410344-d9b7-4cc9-a8bc-ee566d46d0e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:ef:f0', 'vm-uuid': 'dd56af67-ae91-4891-b152-ac9a0f325fc5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.320 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:42 np0005540825 NetworkManager[48963]: <info>  [1764584682.3225] manager: (tap80410344-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.322 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.327 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.328 256155 INFO os_vif [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ef:f0,bridge_name='br-int',has_traffic_filtering=True,id=80410344-d9b7-4cc9-a8bc-ee566d46d0e4,network=Network(82ec8f83-684f-44ae-8389-122bf8ed45ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80410344-d9')#033[00m
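
The AddPortCommand/DbSetCommand transaction above is what os-vif sends over OVSDB when plugging the VIF; the ovs-vsctl equivalent is handy when reproducing the plug by hand. A sketch (illustration only; os-vif talks to ovsdb-server directly rather than shelling out):

    import subprocess

    # ovs-vsctl equivalent of the ovsdbapp transaction logged above,
    # with the port name and external_ids taken straight from the log.
    subprocess.run([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tap80410344-d9",
        "--", "set", "Interface", "tap80410344-d9",
        "external_ids:iface-id=80410344-d9b7-4cc9-a8bc-ee566d46d0e4",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:bd:ef:f0",
        "external_ids:vm-uuid=dd56af67-ae91-4891-b152-ac9a0f325fc5",
    ], check=True)

The iface-id is what lets ovn-controller tie the tap device back to the Neutron port; the network-vif-plugged event Nova waits for a few lines earlier is Neutron acknowledging that binding.
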
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.390 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.391 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.391 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] No VIF found with MAC fa:16:3e:bd:ef:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.391 256155 INFO nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Using config drive#033[00m
Dec  1 05:24:42 np0005540825 nova_compute[256151]: 2025-12-01 10:24:42.418 256155 DEBUG nova.storage.rbd_utils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image dd56af67-ae91-4891-b152-ac9a0f325fc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:24:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:42.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:42.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.031 256155 INFO nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Creating config drive at /var/lib/nova/instances/dd56af67-ae91-4891-b152-ac9a0f325fc5/disk.config#033[00m
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.041 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd56af67-ae91-4891-b152-ac9a0f325fc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoyfn1z5t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.186 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd56af67-ae91-4891-b152-ac9a0f325fc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoyfn1z5t" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.234 256155 DEBUG nova.storage.rbd_utils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] rbd image dd56af67-ae91-4891-b152-ac9a0f325fc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.239 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd56af67-ae91-4891-b152-ac9a0f325fc5/disk.config dd56af67-ae91-4891-b152-ac9a0f325fc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:24:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:43.704Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.707 256155 DEBUG oslo_concurrency.processutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd56af67-ae91-4891-b152-ac9a0f325fc5/disk.config dd56af67-ae91-4891-b152-ac9a0f325fc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.708 256155 INFO nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Deleting local config drive /var/lib/nova/instances/dd56af67-ae91-4891-b152-ac9a0f325fc5/disk.config because it was imported into RBD.#033[00m
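
Note: the entries from 10:24:42.418 to 10:24:43.708 are the complete Ceph-backed config-drive path: nova confirms <uuid>_disk.config does not yet exist in the vms pool, builds the ISO locally with mkisofs, imports it with the rbd CLI, then deletes the local copy. A condensed sketch of that sequence, with commands and paths copied from the log (note the -publisher string is one argument despite being logged unquoted; /tmp/tmpoyfn1z5t is whatever metadata directory nova staged; error handling is elided):

    import os
    import subprocess

    inst = 'dd56af67-ae91-4891-b152-ac9a0f325fc5'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # Step 1: build the ISO9660 config drive (volume label "config-2").
    subprocess.run(
        ['mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpoyfn1z5t'],
        check=True)

    # Step 2: import it into the Ceph "vms" pool as <uuid>_disk.config.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)

    # Step 3: drop the local file, as the INFO line above explains.
    os.unlink(iso)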
Dec  1 05:24:43 np0005540825 kernel: tap80410344-d9: entered promiscuous mode
Dec  1 05:24:43 np0005540825 NetworkManager[48963]: <info>  [1764584683.7662] manager: (tap80410344-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Dec  1 05:24:43 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:43Z|00079|binding|INFO|Claiming lport 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 for this chassis.
Dec  1 05:24:43 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:43Z|00080|binding|INFO|80410344-d9b7-4cc9-a8bc-ee566d46d0e4: Claiming fa:16:3e:bd:ef:f0 10.100.0.11
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.769 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.774 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.787 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:ef:f0 10.100.0.11'], port_security=['fa:16:3e:bd:ef:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dd56af67-ae91-4891-b152-ac9a0f325fc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82ec8f83-684f-44ae-8389-122bf8ed45ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2936a540-1cab-4590-a9db-6bce6aab5d9e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd915a5f-666a-4c2a-9612-6191ae438030, chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=80410344-d9b7-4cc9-a8bc-ee566d46d0e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.789 163291 INFO neutron.agent.ovn.metadata.agent [-] Port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 in datapath 82ec8f83-684f-44ae-8389-122bf8ed45ab bound to our chassis#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.791 163291 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82ec8f83-684f-44ae-8389-122bf8ed45ab#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.805 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[d9fc2e6e-fc37-4a07-90de-7eaf06190e52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.806 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82ec8f83-61 in ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.808 262668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82ec8f83-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 05:24:43 np0005540825 systemd-machined[216307]: New machine qemu-6-instance-0000000b.
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.808 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[29a6b643-5f3f-43a7-b78d-e875f7b7684f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.810 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[de219b81-10e4-4e58-9ec0-754a752679ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:43 np0005540825 systemd-udevd[276570]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 05:24:43 np0005540825 NetworkManager[48963]: <info>  [1764584683.8274] device (tap80410344-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 05:24:43 np0005540825 NetworkManager[48963]: <info>  [1764584683.8291] device (tap80410344-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.829 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[e162fa93-6488-4f38-b7d3-40ee7fa90ef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:43 np0005540825 systemd[1]: Started Virtual Machine qemu-6-instance-0000000b.
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.855 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd20174-e7e7-4a95-b374-f3e5fb416b59]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.856 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.859 256155 DEBUG nova.network.neutron [req-28bc42d2-b6f6-4c2c-b360-de258004aa25 req-9b29f877-ece2-4850-aaa5-e9c18aac3b58 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updated VIF entry in instance network info cache for port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.859 256155 DEBUG nova.network.neutron [req-28bc42d2-b6f6-4c2c-b360-de258004aa25 req-9b29f877-ece2-4850-aaa5-e9c18aac3b58 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updating instance_info_cache with network_info: [{"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
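
Note: the cached network_info above is the contract between Neutron and the guest: one OVS VIF on br-int, one fixed IPv4 address, and meta.mtu of 1442 because the network is tunneled ("tunneled": true) and OVN's Geneve/UDP/IPv4 encapsulation commonly costs 58 bytes off a 1500-byte underlay (1500 - 58 = 1442; treat the exact overhead as an assumption for this deployment). A small sketch that walks the structure, with the dict literal abbreviated from the log entry:

    # Abbreviated from the network_info blob logged above.
    network_info = [{
        'id': '80410344-d9b7-4cc9-a8bc-ee566d46d0e4',
        'address': 'fa:16:3e:bd:ef:f0',
        'devname': 'tap80410344-d9',
        'network': {
            'meta': {'mtu': 1442, 'tunneled': True},
            'subnets': [{'cidr': '10.100.0.0/28',
                         'gateway': {'address': '10.100.0.1'},
                         'ips': [{'address': '10.100.0.11'}]}]}}]

    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['address'], ips,
              'mtu', vif['network']['meta']['mtu'])
    # -> tap80410344-d9 fa:16:3e:bd:ef:f0 ['10.100.0.11'] mtu 1442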
Dec  1 05:24:43 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:43Z|00081|binding|INFO|Setting lport 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 ovn-installed in OVS
Dec  1 05:24:43 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:43Z|00082|binding|INFO|Setting lport 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 up in Southbound
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.864 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:43 np0005540825 nova_compute[256151]: 2025-12-01 10:24:43.878 256155 DEBUG oslo_concurrency.lockutils [req-28bc42d2-b6f6-4c2c-b360-de258004aa25 req-9b29f877-ece2-4850-aaa5-e9c18aac3b58 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.887 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[82048f4d-fc8e-4fd1-8048-735e07f7d2e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.892 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c2e21e-f064-4668-9547-0979c755becf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:43 np0005540825 NetworkManager[48963]: <info>  [1764584683.8932] manager: (tap82ec8f83-60): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.919 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[a7686790-4602-4f1a-9aaf-3c065c2ff814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.923 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[9d504048-0339-4d3c-a990-ae61a3134c69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:43 np0005540825 NetworkManager[48963]: <info>  [1764584683.9448] device (tap82ec8f83-60): carrier: link connected
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.948 262728 DEBUG oslo.privsep.daemon [-] privsep: reply[1b2b4fdd-57c5-4ccf-be77-0fb18781fb6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.965 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2a9ea4-c6ab-4794-824a-9d7fc48a00b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82ec8f83-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:e9:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455257, 'reachable_time': 39858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276602, 'error': None, 'target': 'ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.983 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[f07fe658-6aa0-4a04-b243-55d2ac18c778]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:e912'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455257, 'tstamp': 455257}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276603, 'error': None, 'target': 'ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:43.999 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[23589884-038d-4276-8bdd-0b4113974c66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82ec8f83-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:e9:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455257, 'reachable_time': 39858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276604, 'error': None, 'target': 'ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.031 256155 DEBUG nova.compute.manager [req-a527940b-e506-4836-977a-ff95248bc1cf req-44d20434-370e-462f-a5c7-980ed9e61ec0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.032 256155 DEBUG oslo_concurrency.lockutils [req-a527940b-e506-4836-977a-ff95248bc1cf req-44d20434-370e-462f-a5c7-980ed9e61ec0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.032 256155 DEBUG oslo_concurrency.lockutils [req-a527940b-e506-4836-977a-ff95248bc1cf req-44d20434-370e-462f-a5c7-980ed9e61ec0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.033 256155 DEBUG oslo_concurrency.lockutils [req-a527940b-e506-4836-977a-ff95248bc1cf req-44d20434-370e-462f-a5c7-980ed9e61ec0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.033 256155 DEBUG nova.compute.manager [req-a527940b-e506-4836-977a-ff95248bc1cf req-44d20434-370e-462f-a5c7-980ed9e61ec0 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Processing event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:44.038 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f0384c-44ce-4c41-8e08-fd0793930506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:44.120 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[84a2ed46-eef8-4c88-9d66-8f00aaff7dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:44.121 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82ec8f83-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:44.121 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:44.122 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82ec8f83-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:24:44 np0005540825 NetworkManager[48963]: <info>  [1764584684.1564] manager: (tap82ec8f83-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Dec  1 05:24:44 np0005540825 kernel: tap82ec8f83-60: entered promiscuous mode
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.155 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.159 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:44.160 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82ec8f83-60, col_values=(('external_ids', {'iface-id': '0873d4b6-d57f-4e35-9752-e86556fac481'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:24:44 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:44Z|00083|binding|INFO|Releasing lport 0873d4b6-d57f-4e35-9752-e86556fac481 from this chassis (sb_readonly=0)
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.161 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.185 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:44.187 163291 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82ec8f83-684f-44ae-8389-122bf8ed45ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82ec8f83-684f-44ae-8389-122bf8ed45ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:44.188 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[8632812d-f04f-4761-8c8f-cf4588faa0ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:44.189 163291 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: global
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    log         /dev/log local0 debug
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    log-tag     haproxy-metadata-proxy-82ec8f83-684f-44ae-8389-122bf8ed45ab
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    user        root
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    group       root
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    maxconn     1024
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    pidfile     /var/lib/neutron/external/pids/82ec8f83-684f-44ae-8389-122bf8ed45ab.pid.haproxy
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    daemon
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: defaults
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    log global
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    mode http
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    option httplog
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    option dontlognull
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    option http-server-close
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    option forwardfor
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    retries                 3
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    timeout http-request    30s
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    timeout connect         30s
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    timeout client          32s
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    timeout server          32s
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    timeout http-keep-alive 30s
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: listen listener
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    bind 169.254.169.254:80
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]:    http-request add-header X-OVN-Network-ID 82ec8f83-684f-44ae-8389-122bf8ed45ab
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 05:24:44 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:24:44.190 163291 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab', 'env', 'PROCESS_TAG=haproxy-82ec8f83-684f-44ae-8389-122bf8ed45ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82ec8f83-684f-44ae-8389-122bf8ed45ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
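
Note: the generated config plus the rootwrap command above are the whole OVN metadata-proxy mechanism: a per-network haproxy inside the ovnmeta-<network> namespace binds 169.254.169.254:80, stamps X-OVN-Network-ID onto each request, and relays it to the metadata agent's unix socket (the bare path in "server metadata /var/lib/neutron/metadata_proxy" is haproxy's unix-socket address syntax). A hedged sketch of the request the agent ends up seeing, speaking HTTP over that socket directly; the /latest/meta-data path is the standard EC2-style route, assumed here rather than taken from this log, and the X-Forwarded-For value mimics what "option forwardfor" would add for this instance:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.request('GET', '/latest/meta-data/instance-id', headers={
        # haproxy adds these; the agent maps network ID + source IP to a port.
        'X-OVN-Network-ID': '82ec8f83-684f-44ae-8389-122bf8ed45ab',
        'X-Forwarded-For': '10.100.0.11'})
    resp = conn.getresponse()
    print(resp.status, resp.read())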
Dec  1 05:24:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:44.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:44.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.595 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584684.5951307, dd56af67-ae91-4891-b152-ac9a0f325fc5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.596 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] VM Started (Lifecycle Event)#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.600 256155 DEBUG nova.compute.manager [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.609 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.617 256155 INFO nova.virt.libvirt.driver [-] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Instance spawned successfully.#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.618 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.622 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.627 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
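
Note: the numeric states in this message come from nova.compute.power_state: the database still holds 0 (NOSTATE) while libvirt now reports 1 (RUNNING), which is normal mid-spawn. For reference, a sketch of the mapping (constants as commonly defined in that module; treat the exact values as an assumption and verify against your tree):

    # nova.compute.power_state values as usually defined; 2 and 5 are
    # historical gaps in the numbering.
    STATE_MAP = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                 4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    # "current DB power_state: 0, VM power_state: 1" therefore reads as:
    print(STATE_MAP[0], '->', STATE_MAP[1])  # NOSTATE -> RUNNING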
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.645 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.646 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.647 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.648 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.648 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.649 256155 DEBUG nova.virt.libvirt.driver [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.654 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.655 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584684.5955193, dd56af67-ae91-4891-b152-ac9a0f325fc5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.655 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] VM Paused (Lifecycle Event)#033[00m
Dec  1 05:24:44 np0005540825 podman[276678]: 2025-12-01 10:24:44.667699432 +0000 UTC m=+0.084997465 container create eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 05:24:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.687 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.692 256155 DEBUG nova.virt.driver [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] Emitting event <LifecycleEvent: 1764584684.6084986, dd56af67-ae91-4891-b152-ac9a0f325fc5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.692 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] VM Resumed (Lifecycle Event)#033[00m
Dec  1 05:24:44 np0005540825 podman[276678]: 2025-12-01 10:24:44.627607045 +0000 UTC m=+0.044905158 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.716 256155 INFO nova.compute.manager [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Took 7.52 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.717 256155 DEBUG nova.compute.manager [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.719 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 05:24:44 np0005540825 systemd[1]: Started libpod-conmon-eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40.scope.
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.730 256155 DEBUG nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 05:24:44 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:24:44 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9374c3da3a895811f94c67424462b6a6a3148c1be79e842f5edcc9e7505b6bde/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.761 256155 INFO nova.compute.manager [None req-ce70962b-e57f-41e7-b539-a8a0d5c4ea6e - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 05:24:44 np0005540825 podman[276678]: 2025-12-01 10:24:44.783942106 +0000 UTC m=+0.201240139 container init eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.791 256155 INFO nova.compute.manager [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Took 8.55 seconds to build instance.#033[00m
Dec  1 05:24:44 np0005540825 podman[276678]: 2025-12-01 10:24:44.79341122 +0000 UTC m=+0.210709243 container start eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 05:24:44 np0005540825 nova_compute[256151]: 2025-12-01 10:24:44.807 256155 DEBUG oslo_concurrency.lockutils [None req-ead1a081-149b-4285-aef6-22b4de9ea676 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:24:44 np0005540825 neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab[276693]: [NOTICE]   (276697) : New worker (276699) forked
Dec  1 05:24:44 np0005540825 neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab[276693]: [NOTICE]   (276697) : Loading success.
Dec  1 05:24:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:46 np0005540825 nova_compute[256151]: 2025-12-01 10:24:46.115 256155 DEBUG nova.compute.manager [req-f6aeda60-a66c-43be-b6fe-bf6472952b38 req-65a14799-e121-45a8-b21a-21484a7aa0b5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:24:46 np0005540825 nova_compute[256151]: 2025-12-01 10:24:46.115 256155 DEBUG oslo_concurrency.lockutils [req-f6aeda60-a66c-43be-b6fe-bf6472952b38 req-65a14799-e121-45a8-b21a-21484a7aa0b5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:24:46 np0005540825 nova_compute[256151]: 2025-12-01 10:24:46.115 256155 DEBUG oslo_concurrency.lockutils [req-f6aeda60-a66c-43be-b6fe-bf6472952b38 req-65a14799-e121-45a8-b21a-21484a7aa0b5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:24:46 np0005540825 nova_compute[256151]: 2025-12-01 10:24:46.116 256155 DEBUG oslo_concurrency.lockutils [req-f6aeda60-a66c-43be-b6fe-bf6472952b38 req-65a14799-e121-45a8-b21a-21484a7aa0b5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:24:46 np0005540825 nova_compute[256151]: 2025-12-01 10:24:46.116 256155 DEBUG nova.compute.manager [req-f6aeda60-a66c-43be-b6fe-bf6472952b38 req-65a14799-e121-45a8-b21a-21484a7aa0b5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] No waiting events found dispatching network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:24:46 np0005540825 nova_compute[256151]: 2025-12-01 10:24:46.116 256155 WARNING nova.compute.manager [req-f6aeda60-a66c-43be-b6fe-bf6472952b38 req-65a14799-e121-45a8-b21a-21484a7aa0b5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received unexpected event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 for instance with vm_state active and task_state None.#033[00m
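
Note: this WARNING is the tail end of a benign race visible in full above: spawn's waiter for network-vif-plugged was satisfied at 10:24:44.600 and the instance went active, then Neutron re-delivered the same event at 10:24:46 once OVN had flipped the port up in the Southbound DB, so pop_instance_event found nobody waiting. A toy stand-in for the prepare/pop protocol behind these lock messages (simplified; not the real nova.compute.manager.InstanceEvents):

    import threading

    class InstanceEvents:
        """Toy version of the waiter protocol logged above."""
        def __init__(self):
            self._lock = threading.Lock()     # "Acquiring lock ...-events"
            self._waiters = {}                # (instance, event) -> Event

        def prepare_for_event(self, instance, name):
            with self._lock:
                ev = self._waiters[(instance, name)] = threading.Event()
            return ev                         # spawn blocks on ev.wait()

        def pop_instance_event(self, instance, name):
            with self._lock:                  # held ~0.000s, as logged
                waiter = self._waiters.pop((instance, name), None)
            if waiter is None:
                # second delivery lands here: "No waiting events found
                # dispatching ..." followed by the WARNING above
                return False
            waiter.set()                      # first delivery completes spawn
            return True

The first delivery at 10:24:44.031 pops the waiter and completes the wait; the redelivery at 10:24:46.116 takes the warning branch.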
Dec  1 05:24:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:46.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:46.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 720 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec  1 05:24:46 np0005540825 nova_compute[256151]: 2025-12-01 10:24:46.915 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:24:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:47.259Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:24:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:47.260Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:24:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:47.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:24:47 np0005540825 nova_compute[256151]: 2025-12-01 10:24:47.321 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:48 np0005540825 nova_compute[256151]: 2025-12-01 10:24:48.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:24:48 np0005540825 nova_compute[256151]: 2025-12-01 10:24:48.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  1 05:24:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:48.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:48.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 720 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Dec  1 05:24:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:49 np0005540825 podman[276737]: 2025-12-01 10:24:49.264294486 +0000 UTC m=+0.125398861 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  1 05:24:50 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:50Z|00084|binding|INFO|Releasing lport 0873d4b6-d57f-4e35-9752-e86556fac481 from this chassis (sb_readonly=0)
Dec  1 05:24:50 np0005540825 NetworkManager[48963]: <info>  [1764584690.3305] manager: (patch-provnet-da274a4a-a49c-4f01-b728-391696cd2672-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Dec  1 05:24:50 np0005540825 nova_compute[256151]: 2025-12-01 10:24:50.330 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:50 np0005540825 NetworkManager[48963]: <info>  [1764584690.3327] manager: (patch-br-int-to-provnet-da274a4a-a49c-4f01-b728-391696cd2672): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec  1 05:24:50 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:50Z|00085|binding|INFO|Releasing lport 0873d4b6-d57f-4e35-9752-e86556fac481 from this chassis (sb_readonly=0)
Dec  1 05:24:50 np0005540825 nova_compute[256151]: 2025-12-01 10:24:50.337 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:50.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:50.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec  1 05:24:50 np0005540825 nova_compute[256151]: 2025-12-01 10:24:50.730 256155 DEBUG nova.compute.manager [req-1334b484-9d37-4606-b0d8-995cb4a7e844 req-ab6ebd9a-148b-4187-b251-64bd1fcfba37 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-changed-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:24:50 np0005540825 nova_compute[256151]: 2025-12-01 10:24:50.731 256155 DEBUG nova.compute.manager [req-1334b484-9d37-4606-b0d8-995cb4a7e844 req-ab6ebd9a-148b-4187-b251-64bd1fcfba37 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Refreshing instance network info cache due to event network-changed-80410344-d9b7-4cc9-a8bc-ee566d46d0e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 05:24:50 np0005540825 nova_compute[256151]: 2025-12-01 10:24:50.731 256155 DEBUG oslo_concurrency.lockutils [req-1334b484-9d37-4606-b0d8-995cb4a7e844 req-ab6ebd9a-148b-4187-b251-64bd1fcfba37 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:24:50 np0005540825 nova_compute[256151]: 2025-12-01 10:24:50.732 256155 DEBUG oslo_concurrency.lockutils [req-1334b484-9d37-4606-b0d8-995cb4a7e844 req-ab6ebd9a-148b-4187-b251-64bd1fcfba37 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:24:50 np0005540825 nova_compute[256151]: 2025-12-01 10:24:50.732 256155 DEBUG nova.network.neutron [req-1334b484-9d37-4606-b0d8-995cb4a7e844 req-ab6ebd9a-148b-4187-b251-64bd1fcfba37 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Refreshing network info cache for port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 05:24:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=infra.usagestats t=2025-12-01T10:24:51.230663764Z level=info msg="Usage stats are ready to report"
Dec  1 05:24:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:51] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:24:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:24:51] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:24:51 np0005540825 nova_compute[256151]: 2025-12-01 10:24:51.917 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:52 np0005540825 nova_compute[256151]: 2025-12-01 10:24:52.322 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:52.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:52.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  1 05:24:53 np0005540825 nova_compute[256151]: 2025-12-01 10:24:53.064 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:24:53 np0005540825 nova_compute[256151]: 2025-12-01 10:24:53.065 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:24:53 np0005540825 nova_compute[256151]: 2025-12-01 10:24:53.065 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:24:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:53.705Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:24:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:53.706Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:24:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:54 np0005540825 nova_compute[256151]: 2025-12-01 10:24:54.141 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:24:54 np0005540825 nova_compute[256151]: 2025-12-01 10:24:54.155 256155 DEBUG nova.network.neutron [req-1334b484-9d37-4606-b0d8-995cb4a7e844 req-ab6ebd9a-148b-4187-b251-64bd1fcfba37 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updated VIF entry in instance network info cache for port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 05:24:54 np0005540825 nova_compute[256151]: 2025-12-01 10:24:54.155 256155 DEBUG nova.network.neutron [req-1334b484-9d37-4606-b0d8-995cb4a7e844 req-ab6ebd9a-148b-4187-b251-64bd1fcfba37 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updating instance_info_cache with network_info: [{"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 05:24:54 np0005540825 nova_compute[256151]: 2025-12-01 10:24:54.189 256155 DEBUG oslo_concurrency.lockutils [req-1334b484-9d37-4606-b0d8-995cb4a7e844 req-ab6ebd9a-148b-4187-b251-64bd1fcfba37 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 05:24:54 np0005540825 nova_compute[256151]: 2025-12-01 10:24:54.191 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquired lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:24:54 np0005540825 nova_compute[256151]: 2025-12-01 10:24:54.191 256155 DEBUG nova.network.neutron [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 05:24:54 np0005540825 nova_compute[256151]: 2025-12-01 10:24:54.191 256155 DEBUG nova.objects.instance [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dd56af67-ae91-4891-b152-ac9a0f325fc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 05:24:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:24:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:54.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:24:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:54.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:24:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:24:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  1 05:24:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:24:55 np0005540825 nova_compute[256151]: 2025-12-01 10:24:55.890 256155 DEBUG nova.network.neutron [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updating instance_info_cache with network_info: [{"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 05:24:55 np0005540825 nova_compute[256151]: 2025-12-01 10:24:55.905 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Releasing lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 05:24:55 np0005540825 nova_compute[256151]: 2025-12-01 10:24:55.905 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 05:24:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:56.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:56.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec  1 05:24:56 np0005540825 nova_compute[256151]: 2025-12-01 10:24:56.919 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:57 np0005540825 nova_compute[256151]: 2025-12-01 10:24:57.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:24:57 np0005540825 nova_compute[256151]: 2025-12-01 10:24:57.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:24:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:24:57.262Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:24:57 np0005540825 nova_compute[256151]: 2025-12-01 10:24:57.325 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:24:58 np0005540825 nova_compute[256151]: 2025-12-01 10:24:58.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:24:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:24:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:24:58.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:24:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:24:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:24:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:24:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:24:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 41 op/s
Dec  1 05:24:58 np0005540825 ceph-mgr[74709]: [dashboard INFO request] [192.168.122.100:58714] [POST] [200] [0.003s] [4.0B] [0e60cd5d-ba00-43e4-9cdc-d25025d548d0] /api/prometheus_receiver
Dec  1 05:24:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:24:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:24:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:24:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:24:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:24:59 np0005540825 nova_compute[256151]: 2025-12-01 10:24:59.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:24:59 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:59Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bd:ef:f0 10.100.0.11
Dec  1 05:24:59 np0005540825 ovn_controller[153404]: 2025-12-01T10:24:59Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bd:ef:f0 10.100.0.11
Dec  1 05:25:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:00.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:00.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 131 op/s
Dec  1 05:25:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:01] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec  1 05:25:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:01] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Dec  1 05:25:01 np0005540825 nova_compute[256151]: 2025-12-01 10:25:01.962 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:02 np0005540825 nova_compute[256151]: 2025-12-01 10:25:02.328 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:02.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:02.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.051 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.051 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.051 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.052 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.052 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:25:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:25:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2593606693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:25:03 np0005540825 podman[276822]: 2025-12-01 10:25:03.615856728 +0000 UTC m=+0.089945008 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.616 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.699 256155 DEBUG nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.699 256155 DEBUG nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  1 05:25:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:03.706Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.925 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.926 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4376MB free_disk=59.92213439941406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.926 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:25:03 np0005540825 nova_compute[256151]: 2025-12-01 10:25:03.926 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:25:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:25:04 np0005540825 nova_compute[256151]: 2025-12-01 10:25:04.047 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Instance dd56af67-ae91-4891-b152-ac9a0f325fc5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 05:25:04 np0005540825 nova_compute[256151]: 2025-12-01 10:25:04.047 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:25:04 np0005540825 nova_compute[256151]: 2025-12-01 10:25:04.048 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:25:04 np0005540825 nova_compute[256151]: 2025-12-01 10:25:04.147 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:25:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 354 KiB/s rd, 4.0 MiB/s wr, 94 op/s
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:25:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:04.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:04.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:04.582 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:25:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:04.583 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:25:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:04.584 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3837439768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:25:04 np0005540825 nova_compute[256151]: 2025-12-01 10:25:04.696 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:25:04 np0005540825 nova_compute[256151]: 2025-12-01 10:25:04.704 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:25:04 np0005540825 nova_compute[256151]: 2025-12-01 10:25:04.717 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:25:04 np0005540825 nova_compute[256151]: 2025-12-01 10:25:04.737 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:25:04 np0005540825 nova_compute[256151]: 2025-12-01 10:25:04.737 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:25:04 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:25:05 np0005540825 podman[277015]: 2025-12-01 10:25:05.025877956 +0000 UTC m=+0.059238172 container create be5781c870423bd8622a039cd626f53ae7fff25ec60e0eeb882df4a8a509ac83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_napier, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:25:05 np0005540825 systemd[1]: Started libpod-conmon-be5781c870423bd8622a039cd626f53ae7fff25ec60e0eeb882df4a8a509ac83.scope.
Dec  1 05:25:05 np0005540825 podman[277015]: 2025-12-01 10:25:04.994533094 +0000 UTC m=+0.027893370 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:25:05 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:25:05 np0005540825 podman[277015]: 2025-12-01 10:25:05.12650868 +0000 UTC m=+0.159868966 container init be5781c870423bd8622a039cd626f53ae7fff25ec60e0eeb882df4a8a509ac83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_napier, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  1 05:25:05 np0005540825 podman[277015]: 2025-12-01 10:25:05.135958674 +0000 UTC m=+0.169318860 container start be5781c870423bd8622a039cd626f53ae7fff25ec60e0eeb882df4a8a509ac83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:25:05 np0005540825 podman[277015]: 2025-12-01 10:25:05.141125243 +0000 UTC m=+0.174485479 container attach be5781c870423bd8622a039cd626f53ae7fff25ec60e0eeb882df4a8a509ac83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_napier, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  1 05:25:05 np0005540825 heuristic_napier[277032]: 167 167
Dec  1 05:25:05 np0005540825 systemd[1]: libpod-be5781c870423bd8622a039cd626f53ae7fff25ec60e0eeb882df4a8a509ac83.scope: Deactivated successfully.
Dec  1 05:25:05 np0005540825 podman[277015]: 2025-12-01 10:25:05.14472465 +0000 UTC m=+0.178084876 container died be5781c870423bd8622a039cd626f53ae7fff25ec60e0eeb882df4a8a509ac83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 05:25:05 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a0c5a44f2d1ed3831e1b2e061c3505955721148d1e99d0546935acbe3c2da96b-merged.mount: Deactivated successfully.
Dec  1 05:25:05 np0005540825 podman[277015]: 2025-12-01 10:25:05.19237132 +0000 UTC m=+0.225731506 container remove be5781c870423bd8622a039cd626f53ae7fff25ec60e0eeb882df4a8a509ac83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:25:05 np0005540825 systemd[1]: libpod-conmon-be5781c870423bd8622a039cd626f53ae7fff25ec60e0eeb882df4a8a509ac83.scope: Deactivated successfully.
Dec  1 05:25:05 np0005540825 podman[277055]: 2025-12-01 10:25:05.467948715 +0000 UTC m=+0.075265873 container create 3b8e6574389c30c534bde4c0c0d5abadf405500aba7ab9a2bf3592a867043b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 05:25:05 np0005540825 systemd[1]: Started libpod-conmon-3b8e6574389c30c534bde4c0c0d5abadf405500aba7ab9a2bf3592a867043b9b.scope.
Dec  1 05:25:05 np0005540825 podman[277055]: 2025-12-01 10:25:05.436385477 +0000 UTC m=+0.043702675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:25:05 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:25:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b076354268cdf29fb47757819abcfc7ad3d315ccda59cbbd7165b84dbc9fce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b076354268cdf29fb47757819abcfc7ad3d315ccda59cbbd7165b84dbc9fce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b076354268cdf29fb47757819abcfc7ad3d315ccda59cbbd7165b84dbc9fce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b076354268cdf29fb47757819abcfc7ad3d315ccda59cbbd7165b84dbc9fce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:05 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b076354268cdf29fb47757819abcfc7ad3d315ccda59cbbd7165b84dbc9fce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:05 np0005540825 podman[277055]: 2025-12-01 10:25:05.574377185 +0000 UTC m=+0.181694323 container init 3b8e6574389c30c534bde4c0c0d5abadf405500aba7ab9a2bf3592a867043b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_aryabhata, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  1 05:25:05 np0005540825 podman[277055]: 2025-12-01 10:25:05.593971411 +0000 UTC m=+0.201288539 container start 3b8e6574389c30c534bde4c0c0d5abadf405500aba7ab9a2bf3592a867043b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_aryabhata, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:25:05 np0005540825 podman[277055]: 2025-12-01 10:25:05.598570105 +0000 UTC m=+0.205887323 container attach 3b8e6574389c30c534bde4c0c0d5abadf405500aba7ab9a2bf3592a867043b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_aryabhata, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:25:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:05 np0005540825 sharp_aryabhata[277072]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:25:05 np0005540825 sharp_aryabhata[277072]: --> All data devices are unavailable
Dec  1 05:25:05 np0005540825 systemd[1]: libpod-3b8e6574389c30c534bde4c0c0d5abadf405500aba7ab9a2bf3592a867043b9b.scope: Deactivated successfully.
Dec  1 05:25:05 np0005540825 podman[277089]: 2025-12-01 10:25:05.981360941 +0000 UTC m=+0.026384640 container died 3b8e6574389c30c534bde4c0c0d5abadf405500aba7ab9a2bf3592a867043b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:25:06 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e8b076354268cdf29fb47757819abcfc7ad3d315ccda59cbbd7165b84dbc9fce-merged.mount: Deactivated successfully.
Dec  1 05:25:06 np0005540825 podman[277089]: 2025-12-01 10:25:06.030041979 +0000 UTC m=+0.075065658 container remove 3b8e6574389c30c534bde4c0c0d5abadf405500aba7ab9a2bf3592a867043b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_aryabhata, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  1 05:25:06 np0005540825 systemd[1]: libpod-conmon-3b8e6574389c30c534bde4c0c0d5abadf405500aba7ab9a2bf3592a867043b9b.scope: Deactivated successfully.
Dec  1 05:25:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 354 KiB/s rd, 4.0 MiB/s wr, 95 op/s
Dec  1 05:25:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:06.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:06.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:06 np0005540825 podman[277219]: 2025-12-01 10:25:06.846706413 +0000 UTC m=+0.064062533 container create f54f6c01f6884001912fa88b196fea16a215103c0a30eb4d02228c3500e5a00e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_villani, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:25:06 np0005540825 systemd[1]: Started libpod-conmon-f54f6c01f6884001912fa88b196fea16a215103c0a30eb4d02228c3500e5a00e.scope.
Dec  1 05:25:06 np0005540825 podman[277219]: 2025-12-01 10:25:06.813042648 +0000 UTC m=+0.030398828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:25:06 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:25:06 np0005540825 podman[277219]: 2025-12-01 10:25:06.950263175 +0000 UTC m=+0.167619355 container init f54f6c01f6884001912fa88b196fea16a215103c0a30eb4d02228c3500e5a00e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_villani, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 05:25:06 np0005540825 podman[277219]: 2025-12-01 10:25:06.961403315 +0000 UTC m=+0.178759405 container start f54f6c01f6884001912fa88b196fea16a215103c0a30eb4d02228c3500e5a00e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 05:25:06 np0005540825 nova_compute[256151]: 2025-12-01 10:25:06.964 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:06 np0005540825 podman[277219]: 2025-12-01 10:25:06.968098355 +0000 UTC m=+0.185454545 container attach f54f6c01f6884001912fa88b196fea16a215103c0a30eb4d02228c3500e5a00e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:25:06 np0005540825 sweet_villani[277235]: 167 167
Dec  1 05:25:06 np0005540825 systemd[1]: libpod-f54f6c01f6884001912fa88b196fea16a215103c0a30eb4d02228c3500e5a00e.scope: Deactivated successfully.
Dec  1 05:25:06 np0005540825 podman[277219]: 2025-12-01 10:25:06.971164897 +0000 UTC m=+0.188521077 container died f54f6c01f6884001912fa88b196fea16a215103c0a30eb4d02228c3500e5a00e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_villani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:25:07 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ddb471ecff5f496ebbc1f94c74a9507863e6533d7d3b7b90ce2539b92e4e2fc1-merged.mount: Deactivated successfully.
Dec  1 05:25:07 np0005540825 podman[277219]: 2025-12-01 10:25:07.024826549 +0000 UTC m=+0.242182679 container remove f54f6c01f6884001912fa88b196fea16a215103c0a30eb4d02228c3500e5a00e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_villani, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:25:07 np0005540825 nova_compute[256151]: 2025-12-01 10:25:07.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:25:07 np0005540825 systemd[1]: libpod-conmon-f54f6c01f6884001912fa88b196fea16a215103c0a30eb4d02228c3500e5a00e.scope: Deactivated successfully.
Dec  1 05:25:07 np0005540825 podman[277260]: 2025-12-01 10:25:07.256428412 +0000 UTC m=+0.073239529 container create b800f5b69edf8669b6e84b908e2be7e576c0d49a290169d38e5c0fb655a5dc0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:25:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:07.262Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:25:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:07.263Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:25:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:07.263Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:25:07 np0005540825 systemd[1]: Started libpod-conmon-b800f5b69edf8669b6e84b908e2be7e576c0d49a290169d38e5c0fb655a5dc0f.scope.
Dec  1 05:25:07 np0005540825 podman[277260]: 2025-12-01 10:25:07.226522039 +0000 UTC m=+0.043333206 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:25:07 np0005540825 nova_compute[256151]: 2025-12-01 10:25:07.329 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:07 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:25:07 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebfbaf9895343b5b6ff91458bf564642689e05dfd5c9d111bc7dfc12768843/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:07 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebfbaf9895343b5b6ff91458bf564642689e05dfd5c9d111bc7dfc12768843/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:07 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebfbaf9895343b5b6ff91458bf564642689e05dfd5c9d111bc7dfc12768843/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:07 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebfbaf9895343b5b6ff91458bf564642689e05dfd5c9d111bc7dfc12768843/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:07 np0005540825 podman[277260]: 2025-12-01 10:25:07.383785454 +0000 UTC m=+0.200596621 container init b800f5b69edf8669b6e84b908e2be7e576c0d49a290169d38e5c0fb655a5dc0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_curie, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 05:25:07 np0005540825 podman[277260]: 2025-12-01 10:25:07.398946352 +0000 UTC m=+0.215757449 container start b800f5b69edf8669b6e84b908e2be7e576c0d49a290169d38e5c0fb655a5dc0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_curie, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  1 05:25:07 np0005540825 podman[277260]: 2025-12-01 10:25:07.402837756 +0000 UTC m=+0.219648883 container attach b800f5b69edf8669b6e84b908e2be7e576c0d49a290169d38e5c0fb655a5dc0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:25:07 np0005540825 podman[277274]: 2025-12-01 10:25:07.433998174 +0000 UTC m=+0.124204029 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Dec  1 05:25:07 np0005540825 sharp_curie[277277]: {
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:    "1": [
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:        {
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            "devices": [
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "/dev/loop3"
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            ],
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            "lv_name": "ceph_lv0",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            "lv_size": "21470642176",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            "name": "ceph_lv0",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            "tags": {
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.cluster_name": "ceph",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.crush_device_class": "",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.encrypted": "0",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.osd_id": "1",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.type": "block",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.vdo": "0",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:                "ceph.with_tpm": "0"
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            },
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            "type": "block",
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:            "vg_name": "ceph_vg0"
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:        }
Dec  1 05:25:07 np0005540825 sharp_curie[277277]:    ]
Dec  1 05:25:07 np0005540825 sharp_curie[277277]: }
Dec  1 05:25:07 np0005540825 systemd[1]: libpod-b800f5b69edf8669b6e84b908e2be7e576c0d49a290169d38e5c0fb655a5dc0f.scope: Deactivated successfully.
Dec  1 05:25:07 np0005540825 podman[277260]: 2025-12-01 10:25:07.755700968 +0000 UTC m=+0.572512095 container died b800f5b69edf8669b6e84b908e2be7e576c0d49a290169d38e5c0fb655a5dc0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_curie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:25:07 np0005540825 systemd[1]: var-lib-containers-storage-overlay-69ebfbaf9895343b5b6ff91458bf564642689e05dfd5c9d111bc7dfc12768843-merged.mount: Deactivated successfully.
Dec  1 05:25:07 np0005540825 podman[277260]: 2025-12-01 10:25:07.806931945 +0000 UTC m=+0.623743032 container remove b800f5b69edf8669b6e84b908e2be7e576c0d49a290169d38e5c0fb655a5dc0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:25:07 np0005540825 systemd[1]: libpod-conmon-b800f5b69edf8669b6e84b908e2be7e576c0d49a290169d38e5c0fb655a5dc0f.scope: Deactivated successfully.
Dec  1 05:25:08 np0005540825 nova_compute[256151]: 2025-12-01 10:25:08.052 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:25:08 np0005540825 nova_compute[256151]: 2025-12-01 10:25:08.053 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  1 05:25:08 np0005540825 nova_compute[256151]: 2025-12-01 10:25:08.077 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  1 05:25:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 352 KiB/s rd, 4.0 MiB/s wr, 94 op/s
Dec  1 05:25:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:08.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:08 np0005540825 podman[277411]: 2025-12-01 10:25:08.536868429 +0000 UTC m=+0.049197283 container create 4530155fea57a2d6effbe40cc98f8bbc130e4221ae16f4877647ed96d853736a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hypatia, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 05:25:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:08.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:08 np0005540825 systemd[1]: Started libpod-conmon-4530155fea57a2d6effbe40cc98f8bbc130e4221ae16f4877647ed96d853736a.scope.
Dec  1 05:25:08 np0005540825 podman[277411]: 2025-12-01 10:25:08.515182926 +0000 UTC m=+0.027511760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:25:08 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:25:08 np0005540825 podman[277411]: 2025-12-01 10:25:08.630555876 +0000 UTC m=+0.142884710 container init 4530155fea57a2d6effbe40cc98f8bbc130e4221ae16f4877647ed96d853736a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hypatia, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:25:08 np0005540825 podman[277411]: 2025-12-01 10:25:08.641550542 +0000 UTC m=+0.153879386 container start 4530155fea57a2d6effbe40cc98f8bbc130e4221ae16f4877647ed96d853736a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 05:25:08 np0005540825 podman[277411]: 2025-12-01 10:25:08.645830937 +0000 UTC m=+0.158159771 container attach 4530155fea57a2d6effbe40cc98f8bbc130e4221ae16f4877647ed96d853736a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hypatia, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:25:08 np0005540825 practical_hypatia[277427]: 167 167
Dec  1 05:25:08 np0005540825 systemd[1]: libpod-4530155fea57a2d6effbe40cc98f8bbc130e4221ae16f4877647ed96d853736a.scope: Deactivated successfully.
Dec  1 05:25:08 np0005540825 podman[277411]: 2025-12-01 10:25:08.649626809 +0000 UTC m=+0.161955623 container died 4530155fea57a2d6effbe40cc98f8bbc130e4221ae16f4877647ed96d853736a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hypatia, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:25:08 np0005540825 systemd[1]: var-lib-containers-storage-overlay-3380f2f69bc07f39991b69911490ca7ed5cc84157da39e9b9c67e6cee04f58d5-merged.mount: Deactivated successfully.
Dec  1 05:25:08 np0005540825 podman[277411]: 2025-12-01 10:25:08.69767193 +0000 UTC m=+0.210000784 container remove 4530155fea57a2d6effbe40cc98f8bbc130e4221ae16f4877647ed96d853736a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hypatia, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:25:08 np0005540825 systemd[1]: libpod-conmon-4530155fea57a2d6effbe40cc98f8bbc130e4221ae16f4877647ed96d853736a.scope: Deactivated successfully.
Dec  1 05:25:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:08.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:08 np0005540825 podman[277451]: 2025-12-01 10:25:08.975170336 +0000 UTC m=+0.079029364 container create 986eaf15339c28049692a2ceb458818659dd719cd18beceff61fa453e494504b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:25:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:25:09 np0005540825 systemd[1]: Started libpod-conmon-986eaf15339c28049692a2ceb458818659dd719cd18beceff61fa453e494504b.scope.
Dec  1 05:25:09 np0005540825 podman[277451]: 2025-12-01 10:25:08.94442287 +0000 UTC m=+0.048281948 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:25:09 np0005540825 nova_compute[256151]: 2025-12-01 10:25:09.052 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:25:09 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:25:09 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d330b45e370127bdb079ea68035754e6da7d07bc86f364a1a369874bc4e8c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:09 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d330b45e370127bdb079ea68035754e6da7d07bc86f364a1a369874bc4e8c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:09 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d330b45e370127bdb079ea68035754e6da7d07bc86f364a1a369874bc4e8c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:09 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d330b45e370127bdb079ea68035754e6da7d07bc86f364a1a369874bc4e8c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:25:09 np0005540825 podman[277451]: 2025-12-01 10:25:09.081765701 +0000 UTC m=+0.185624729 container init 986eaf15339c28049692a2ceb458818659dd719cd18beceff61fa453e494504b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shirley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 05:25:09 np0005540825 podman[277451]: 2025-12-01 10:25:09.089034286 +0000 UTC m=+0.192893274 container start 986eaf15339c28049692a2ceb458818659dd719cd18beceff61fa453e494504b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:25:09 np0005540825 podman[277451]: 2025-12-01 10:25:09.092433097 +0000 UTC m=+0.196292125 container attach 986eaf15339c28049692a2ceb458818659dd719cd18beceff61fa453e494504b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shirley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:25:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:25:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:25:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:25:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:25:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:25:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:25:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:25:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:25:09 np0005540825 lvm[277544]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:25:09 np0005540825 lvm[277544]: VG ceph_vg0 finished
Dec  1 05:25:09 np0005540825 wonderful_shirley[277468]: {}
Dec  1 05:25:09 np0005540825 systemd[1]: libpod-986eaf15339c28049692a2ceb458818659dd719cd18beceff61fa453e494504b.scope: Deactivated successfully.
Dec  1 05:25:09 np0005540825 podman[277451]: 2025-12-01 10:25:09.932490989 +0000 UTC m=+1.036349997 container died 986eaf15339c28049692a2ceb458818659dd719cd18beceff61fa453e494504b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  1 05:25:09 np0005540825 systemd[1]: libpod-986eaf15339c28049692a2ceb458818659dd719cd18beceff61fa453e494504b.scope: Consumed 1.419s CPU time.
Dec  1 05:25:09 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d0d330b45e370127bdb079ea68035754e6da7d07bc86f364a1a369874bc4e8c5-merged.mount: Deactivated successfully.
Dec  1 05:25:09 np0005540825 podman[277451]: 2025-12-01 10:25:09.972994468 +0000 UTC m=+1.076853456 container remove 986eaf15339c28049692a2ceb458818659dd719cd18beceff61fa453e494504b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_shirley, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:25:09 np0005540825 systemd[1]: libpod-conmon-986eaf15339c28049692a2ceb458818659dd719cd18beceff61fa453e494504b.scope: Deactivated successfully.
Dec  1 05:25:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:25:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:25:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:25:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:25:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 352 KiB/s rd, 4.0 MiB/s wr, 94 op/s
Dec  1 05:25:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:10.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:10.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:25:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:25:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:11] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Dec  1 05:25:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:11] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Dec  1 05:25:12 np0005540825 nova_compute[256151]: 2025-12-01 10:25:12.016 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 77 op/s
Dec  1 05:25:12 np0005540825 nova_compute[256151]: 2025-12-01 10:25:12.331 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:12.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:12.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:13.708Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:25:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:13.708Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:25:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:13.708Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:25:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 77 op/s
Dec  1 05:25:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:14.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:25:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:14.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:25:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 75 op/s
Dec  1 05:25:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:16.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:16.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:17 np0005540825 nova_compute[256151]: 2025-12-01 10:25:17.019 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:17.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:17 np0005540825 nova_compute[256151]: 2025-12-01 10:25:17.333 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Dec  1 05:25:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:18.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:18.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:18.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:25:20 np0005540825 podman[277595]: 2025-12-01 10:25:20.306886525 +0000 UTC m=+0.158285664 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  1 05:25:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Dec  1 05:25:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:20.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:20.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:21] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Dec  1 05:25:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:21] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Dec  1 05:25:22 np0005540825 nova_compute[256151]: 2025-12-01 10:25:22.021 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 200 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Dec  1 05:25:22 np0005540825 nova_compute[256151]: 2025-12-01 10:25:22.335 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:22.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:22.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:23.709Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:25:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 200 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  1 05:25:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:24.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:25:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:25:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:24.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:25:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:26.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:26.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:27 np0005540825 nova_compute[256151]: 2025-12-01 10:25:27.023 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:27.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:27 np0005540825 nova_compute[256151]: 2025-12-01 10:25:27.337 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:28 np0005540825 nova_compute[256151]: 2025-12-01 10:25:28.017 256155 INFO nova.compute.manager [None req-43f4800c-b5cf-4b11-ab0a-07eeb9096a0d 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Get console output
Dec  1 05:25:28 np0005540825 nova_compute[256151]: 2025-12-01 10:25:28.025 262942 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec  1 05:25:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:25:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:28.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:28.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:25:29 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:29.080 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.080 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:29 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:29.081 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.211 256155 DEBUG nova.compute.manager [req-aa0752a2-78b8-4657-a7e3-d59a4da61f0b req-c964ae82-482a-4056-966d-9364534db860 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-changed-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.211 256155 DEBUG nova.compute.manager [req-aa0752a2-78b8-4657-a7e3-d59a4da61f0b req-c964ae82-482a-4056-966d-9364534db860 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Refreshing instance network info cache due to event network-changed-80410344-d9b7-4cc9-a8bc-ee566d46d0e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.212 256155 DEBUG oslo_concurrency.lockutils [req-aa0752a2-78b8-4657-a7e3-d59a4da61f0b req-c964ae82-482a-4056-966d-9364534db860 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.212 256155 DEBUG oslo_concurrency.lockutils [req-aa0752a2-78b8-4657-a7e3-d59a4da61f0b req-c964ae82-482a-4056-966d-9364534db860 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.213 256155 DEBUG nova.network.neutron [req-aa0752a2-78b8-4657-a7e3-d59a4da61f0b req-c964ae82-482a-4056-966d-9364534db860 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Refreshing network info cache for port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.314 256155 DEBUG nova.compute.manager [req-f1692a74-f968-4e8e-b04b-1a20a0d69601 req-cce5ac0b-65f5-45c3-8d25-693cce248004 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-vif-unplugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.315 256155 DEBUG oslo_concurrency.lockutils [req-f1692a74-f968-4e8e-b04b-1a20a0d69601 req-cce5ac0b-65f5-45c3-8d25-693cce248004 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.316 256155 DEBUG oslo_concurrency.lockutils [req-f1692a74-f968-4e8e-b04b-1a20a0d69601 req-cce5ac0b-65f5-45c3-8d25-693cce248004 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.316 256155 DEBUG oslo_concurrency.lockutils [req-f1692a74-f968-4e8e-b04b-1a20a0d69601 req-cce5ac0b-65f5-45c3-8d25-693cce248004 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.317 256155 DEBUG nova.compute.manager [req-f1692a74-f968-4e8e-b04b-1a20a0d69601 req-cce5ac0b-65f5-45c3-8d25-693cce248004 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] No waiting events found dispatching network-vif-unplugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 05:25:29 np0005540825 nova_compute[256151]: 2025-12-01 10:25:29.317 256155 WARNING nova.compute.manager [req-f1692a74-f968-4e8e-b04b-1a20a0d69601 req-cce5ac0b-65f5-45c3-8d25-693cce248004 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received unexpected event network-vif-unplugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 for instance with vm_state active and task_state None.
Dec  1 05:25:30 np0005540825 nova_compute[256151]: 2025-12-01 10:25:30.256 256155 INFO nova.compute.manager [None req-6aedbd0a-8774-4b8f-96d2-721af6f0fb91 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Get console output
Dec  1 05:25:30 np0005540825 nova_compute[256151]: 2025-12-01 10:25:30.263 262942 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec  1 05:25:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:25:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:30.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:30.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:31 np0005540825 nova_compute[256151]: 2025-12-01 10:25:31.070 256155 DEBUG nova.network.neutron [req-aa0752a2-78b8-4657-a7e3-d59a4da61f0b req-c964ae82-482a-4056-966d-9364534db860 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updated VIF entry in instance network info cache for port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 05:25:31 np0005540825 nova_compute[256151]: 2025-12-01 10:25:31.070 256155 DEBUG nova.network.neutron [req-aa0752a2-78b8-4657-a7e3-d59a4da61f0b req-c964ae82-482a-4056-966d-9364534db860 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updating instance_info_cache with network_info: [{"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 05:25:31 np0005540825 nova_compute[256151]: 2025-12-01 10:25:31.085 256155 DEBUG oslo_concurrency.lockutils [req-aa0752a2-78b8-4657-a7e3-d59a4da61f0b req-c964ae82-482a-4056-966d-9364534db860 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 05:25:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:31] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:25:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:31] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:25:31 np0005540825 nova_compute[256151]: 2025-12-01 10:25:31.398 256155 DEBUG nova.compute.manager [req-c08dad32-b538-45ee-b07f-8be2dfce8a7b req-db9482b8-10d8-4960-8aa6-53c5cbc515f5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:25:31 np0005540825 nova_compute[256151]: 2025-12-01 10:25:31.399 256155 DEBUG oslo_concurrency.lockutils [req-c08dad32-b538-45ee-b07f-8be2dfce8a7b req-db9482b8-10d8-4960-8aa6-53c5cbc515f5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:25:31 np0005540825 nova_compute[256151]: 2025-12-01 10:25:31.399 256155 DEBUG oslo_concurrency.lockutils [req-c08dad32-b538-45ee-b07f-8be2dfce8a7b req-db9482b8-10d8-4960-8aa6-53c5cbc515f5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:25:31 np0005540825 nova_compute[256151]: 2025-12-01 10:25:31.400 256155 DEBUG oslo_concurrency.lockutils [req-c08dad32-b538-45ee-b07f-8be2dfce8a7b req-db9482b8-10d8-4960-8aa6-53c5cbc515f5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:25:31 np0005540825 nova_compute[256151]: 2025-12-01 10:25:31.400 256155 DEBUG nova.compute.manager [req-c08dad32-b538-45ee-b07f-8be2dfce8a7b req-db9482b8-10d8-4960-8aa6-53c5cbc515f5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] No waiting events found dispatching network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 05:25:31 np0005540825 nova_compute[256151]: 2025-12-01 10:25:31.401 256155 WARNING nova.compute.manager [req-c08dad32-b538-45ee-b07f-8be2dfce8a7b req-db9482b8-10d8-4960-8aa6-53c5cbc515f5 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received unexpected event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 for instance with vm_state active and task_state None.
Dec  1 05:25:32 np0005540825 nova_compute[256151]: 2025-12-01 10:25:32.065 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:32 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:32.083 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 05:25:32 np0005540825 nova_compute[256151]: 2025-12-01 10:25:32.194 256155 DEBUG nova.compute.manager [req-286e00e3-d5f3-4845-b431-116de0b4d3c9 req-bd1b9f21-dd25-4279-ab2f-87dc231df351 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-changed-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:25:32 np0005540825 nova_compute[256151]: 2025-12-01 10:25:32.194 256155 DEBUG nova.compute.manager [req-286e00e3-d5f3-4845-b431-116de0b4d3c9 req-bd1b9f21-dd25-4279-ab2f-87dc231df351 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Refreshing instance network info cache due to event network-changed-80410344-d9b7-4cc9-a8bc-ee566d46d0e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 05:25:32 np0005540825 nova_compute[256151]: 2025-12-01 10:25:32.195 256155 DEBUG oslo_concurrency.lockutils [req-286e00e3-d5f3-4845-b431-116de0b4d3c9 req-bd1b9f21-dd25-4279-ab2f-87dc231df351 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 05:25:32 np0005540825 nova_compute[256151]: 2025-12-01 10:25:32.195 256155 DEBUG oslo_concurrency.lockutils [req-286e00e3-d5f3-4845-b431-116de0b4d3c9 req-bd1b9f21-dd25-4279-ab2f-87dc231df351 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 05:25:32 np0005540825 nova_compute[256151]: 2025-12-01 10:25:32.195 256155 DEBUG nova.network.neutron [req-286e00e3-d5f3-4845-b431-116de0b4d3c9 req-bd1b9f21-dd25-4279-ab2f-87dc231df351 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Refreshing network info cache for port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 05:25:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  1 05:25:32 np0005540825 nova_compute[256151]: 2025-12-01 10:25:32.338 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:32 np0005540825 nova_compute[256151]: 2025-12-01 10:25:32.387 256155 INFO nova.compute.manager [None req-5ae5043d-8e80-4929-a3ce-09e08b44fd06 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Get console output
Dec  1 05:25:32 np0005540825 nova_compute[256151]: 2025-12-01 10:25:32.393 262942 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec  1 05:25:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:32.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:32.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.514 256155 DEBUG nova.compute.manager [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.514 256155 DEBUG oslo_concurrency.lockutils [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.515 256155 DEBUG oslo_concurrency.lockutils [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.516 256155 DEBUG oslo_concurrency.lockutils [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.516 256155 DEBUG nova.compute.manager [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] No waiting events found dispatching network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.517 256155 WARNING nova.compute.manager [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received unexpected event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 for instance with vm_state active and task_state None.
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.517 256155 DEBUG nova.compute.manager [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.517 256155 DEBUG oslo_concurrency.lockutils [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.518 256155 DEBUG oslo_concurrency.lockutils [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.518 256155 DEBUG oslo_concurrency.lockutils [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.518 256155 DEBUG nova.compute.manager [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] No waiting events found dispatching network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.519 256155 WARNING nova.compute.manager [req-f44cc7bb-e74b-486c-ad39-5c22d65fd9fd req-4a0601e2-53e9-4633-8ca1-bcecf3d51632 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received unexpected event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 for instance with vm_state active and task_state None.
Dec  1 05:25:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:33.711Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.985 256155 DEBUG nova.network.neutron [req-286e00e3-d5f3-4845-b431-116de0b4d3c9 req-bd1b9f21-dd25-4279-ab2f-87dc231df351 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updated VIF entry in instance network info cache for port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 05:25:33 np0005540825 nova_compute[256151]: 2025-12-01 10:25:33.986 256155 DEBUG nova.network.neutron [req-286e00e3-d5f3-4845-b431-116de0b4d3c9 req-bd1b9f21-dd25-4279-ab2f-87dc231df351 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updating instance_info_cache with network_info: [{"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 05:25:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:25:34 np0005540825 nova_compute[256151]: 2025-12-01 10:25:34.003 256155 DEBUG oslo_concurrency.lockutils [req-286e00e3-d5f3-4845-b431-116de0b4d3c9 req-bd1b9f21-dd25-4279-ab2f-87dc231df351 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 05:25:34 np0005540825 podman[277660]: 2025-12-01 10:25:34.242824512 +0000 UTC m=+0.099922406 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec  1 05:25:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 14 KiB/s wr, 2 op/s
Dec  1 05:25:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:34.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:34.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 22 KiB/s wr, 30 op/s
Dec  1 05:25:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:36.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:36.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.114 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:37.266Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.340 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.828 256155 DEBUG oslo_concurrency.lockutils [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.828 256155 DEBUG oslo_concurrency.lockutils [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.829 256155 DEBUG oslo_concurrency.lockutils [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.829 256155 DEBUG oslo_concurrency.lockutils [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.830 256155 DEBUG oslo_concurrency.lockutils [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
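The Acquiring/acquired/"released" triplets above are oslo.concurrency's standard lock tracing: nova serializes all state-changing work on an instance behind a lock named after the instance UUID, plus a second "-events" lock guarding the instance's event table. The same pattern in miniature, with an illustrative stand-in for the locked work:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "dd56af67-ae91-4891-b152-ac9a0f325fc5"

    def do_terminate_instance():
        pass  # stand-in for the critical section nova runs under the lock

    # lockutils.lock() emits exactly the Acquiring/acquired/released DEBUG
    # lines seen above; holding it prevents concurrent operations (resize,
    # reboot, a second delete) from racing this termination.
    with lockutils.lock(INSTANCE_UUID):
        do_terminate_instance()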
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.832 256155 INFO nova.compute.manager [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Terminating instance#033[00m
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.834 256155 DEBUG nova.compute.manager [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 05:25:37 np0005540825 kernel: tap80410344-d9 (unregistering): left promiscuous mode
Dec  1 05:25:37 np0005540825 NetworkManager[48963]: <info>  [1764584737.9012] device (tap80410344-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 05:25:37 np0005540825 ovn_controller[153404]: 2025-12-01T10:25:37Z|00086|binding|INFO|Releasing lport 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 from this chassis (sb_readonly=0)
Dec  1 05:25:37 np0005540825 ovn_controller[153404]: 2025-12-01T10:25:37Z|00087|binding|INFO|Setting lport 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 down in Southbound
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.913 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:25:37 np0005540825 ovn_controller[153404]: 2025-12-01T10:25:37Z|00088|binding|INFO|Removing iface tap80410344-d9 ovn-installed in OVS
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.915 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:25:37 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:37.923 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:ef:f0 10.100.0.11'], port_security=['fa:16:3e:bd:ef:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dd56af67-ae91-4891-b152-ac9a0f325fc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82ec8f83-684f-44ae-8389-122bf8ed45ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f6be4e572624210b91193c011607c08', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2936a540-1cab-4590-a9db-6bce6aab5d9e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd915a5f-666a-4c2a-9612-6191ae438030, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>], logical_port=80410344-d9b7-4cc9-a8bc-ee566d46d0e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3429b436d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 05:25:37 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:37.924 163291 INFO neutron.agent.ovn.metadata.agent [-] Port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 in datapath 82ec8f83-684f-44ae-8389-122bf8ed45ab unbound from our chassis#033[00m
Dec  1 05:25:37 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:37.924 163291 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82ec8f83-684f-44ae-8389-122bf8ed45ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 05:25:37 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:37.926 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[86901f9e-b18d-4c30-a469-62be16b4b930]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:25:37 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:37.926 163291 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab namespace which is not needed anymore#033[00m
Dec  1 05:25:37 np0005540825 nova_compute[256151]: 2025-12-01 10:25:37.945 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:25:37 np0005540825 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  1 05:25:37 np0005540825 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Consumed 15.870s CPU time.
Dec  1 05:25:37 np0005540825 systemd-machined[216307]: Machine qemu-6-instance-0000000b terminated.
Dec  1 05:25:38 np0005540825 podman[277684]: 2025-12-01 10:25:38.017457352 +0000 UTC m=+0.081693086 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.070 256155 INFO nova.virt.libvirt.driver [-] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Instance destroyed successfully.#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.071 256155 DEBUG nova.objects.instance [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lazy-loading 'resources' on Instance uuid dd56af67-ae91-4891-b152-ac9a0f325fc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 05:25:38 np0005540825 neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab[276693]: [NOTICE]   (276697) : haproxy version is 2.8.14-c23fe91
Dec  1 05:25:38 np0005540825 neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab[276693]: [NOTICE]   (276697) : path to executable is /usr/sbin/haproxy
Dec  1 05:25:38 np0005540825 neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab[276693]: [WARNING]  (276697) : Exiting Master process...
Dec  1 05:25:38 np0005540825 neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab[276693]: [ALERT]    (276697) : Current worker (276699) exited with code 143 (Terminated)
Dec  1 05:25:38 np0005540825 neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab[276693]: [WARNING]  (276697) : All workers exited. Exiting... (0)
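haproxy's "exited with code 143" is the usual 128+signal convention: 143 - 128 = 15 = SIGTERM, i.e. the worker was deliberately terminated as the ovnmeta proxy was shut down, not a crash. A tiny decoder for that convention:

    import signal

    def decode_exit(code: int) -> str:
        # Shell/wait convention: codes above 128 mean killed by signal N-128.
        if code > 128:
            return f"killed by {signal.Signals(code - 128).name}"
        return f"exited normally with status {code}"

    print(decode_exit(143))  # killed by SIGTERM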
Dec  1 05:25:38 np0005540825 systemd[1]: libpod-eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40.scope: Deactivated successfully.
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.091 256155 DEBUG nova.virt.libvirt.vif [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T10:24:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1744736494',display_name='tempest-TestNetworkBasicOps-server-1744736494',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1744736494',id=11,image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFEtm0tPdDT/qfCstlsxaIuU7F73TYcccr1SL0AFFhbSP6QyY3W7FSBEr169NqnltBPMCF/mGTi3JWFSUnlZAo+KOT76m6a5IiHBdDTIPsf63wASE4wAGvguH8uhatHBgg==',key_name='tempest-TestNetworkBasicOps-1704588061',keypairs=<?>,launch_index=0,launched_at=2025-12-01T10:24:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f6be4e572624210b91193c011607c08',ramdisk_id='',reservation_id='r-b57vd3uf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8f75d6de-6ce0-44e1-b417-d0111424475b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1248115384',owner_user_name='tempest-TestNetworkBasicOps-1248115384-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T10:24:44Z,user_data=None,user_id='5b56a238daf0445798410e51caada0ff',uuid=dd56af67-ae91-4891-b152-ac9a0f325fc5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.091 256155 DEBUG nova.network.os_vif_util [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converting VIF {"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.092 256155 DEBUG nova.network.os_vif_util [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:ef:f0,bridge_name='br-int',has_traffic_filtering=True,id=80410344-d9b7-4cc9-a8bc-ee566d46d0e4,network=Network(82ec8f83-684f-44ae-8389-122bf8ed45ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80410344-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.093 256155 DEBUG os_vif [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:ef:f0,bridge_name='br-int',has_traffic_filtering=True,id=80410344-d9b7-4cc9-a8bc-ee566d46d0e4,network=Network(82ec8f83-684f-44ae-8389-122bf8ed45ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80410344-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.094 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.095 256155 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80410344-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.097 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:25:38 np0005540825 podman[277729]: 2025-12-01 10:25:38.098716715 +0000 UTC m=+0.057554287 container died eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.099 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.102 256155 INFO os_vif [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:ef:f0,bridge_name='br-int',has_traffic_filtering=True,id=80410344-d9b7-4cc9-a8bc-ee566d46d0e4,network=Network(82ec8f83-684f-44ae-8389-122bf8ed45ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80410344-d9')#033[00m
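The unplug sequence above runs through the os-vif library: nova converts its network-info dict to a VIFOpenVSwitch object (the "Converting"/"Converted object" lines) and hands it to os_vif.unplug(), which dispatches to the ovs plugin. A minimal sketch of the same call, with field values copied from the log; treat the constructor arguments as an approximation of os-vif's object model rather than a complete one:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # load the registered plug/unplug plugins

    # Values copied from the VIFOpenVSwitch dump logged above.
    my_vif = vif.VIFOpenVSwitch(
        id="80410344-d9b7-4cc9-a8bc-ee566d46d0e4",
        address="fa:16:3e:bd:ef:f0",
        vif_name="tap80410344-d9",
        bridge_name="br-int",
    )
    instance = instance_info.InstanceInfo(
        uuid="dd56af67-ae91-4891-b152-ac9a0f325fc5",
        name="tempest-TestNetworkBasicOps-server-1744736494",
    )

    # Dispatches to the ovs plugin, which removes the port from br-int.
    os_vif.unplug(my_vif, instance)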
Dec  1 05:25:38 np0005540825 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40-userdata-shm.mount: Deactivated successfully.
Dec  1 05:25:38 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9374c3da3a895811f94c67424462b6a6a3148c1be79e842f5edcc9e7505b6bde-merged.mount: Deactivated successfully.
Dec  1 05:25:38 np0005540825 podman[277729]: 2025-12-01 10:25:38.140808896 +0000 UTC m=+0.099646448 container cleanup eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:25:38 np0005540825 systemd[1]: libpod-conmon-eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40.scope: Deactivated successfully.
Dec  1 05:25:38 np0005540825 podman[277786]: 2025-12-01 10:25:38.20160796 +0000 UTC m=+0.036902563 container remove eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 05:25:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:38.207 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[c844a0d1-bf88-40e4-b818-926f8ca18e72]: (4, ('Mon Dec  1 10:25:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab (eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40)\neadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40\nMon Dec  1 10:25:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab (eadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40)\neadf37a3ab332a08e31ae832dd169ad711e64d25765d40eea90159f455b1fb40\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:25:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:38.209 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[1dacf997-6b68-4f9e-9217-4f6e0f88e221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:25:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:38.210 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82ec8f83-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
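Both DelPortCommand transactions (nova removing tap80410344-d9 above, and the metadata agent here removing its own tap82ec8f83-60) go through ovsdbapp's ovs-vsctl-style API. A sketch of the equivalent direct call, assuming the stock local OVSDB unix socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Default local OVSDB socket; adjust for the deployment at hand.
    OVSDB = "unix:/var/run/openvswitch/db.sock"

    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same operation as the logged txn:
    # DelPortCommand(port=..., bridge=br-int, if_exists=True)
    api.del_port("tap80410344-d9", bridge="br-int", if_exists=True).execute(
        check_error=True)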
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.211 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:25:38 np0005540825 kernel: tap82ec8f83-60: left promiscuous mode
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.225 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:25:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:38.226 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[3f30f52c-c3d2-42dd-a34b-eb012e9f0cc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:25:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:38.250 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdb7059-6222-4ff6-833c-9f03ee977222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:25:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:38.251 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[d87925d0-d23c-48d2-ac3c-963f6d0ef09e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:25:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:38.265 262668 DEBUG oslo.privsep.daemon [-] privsep: reply[9f06df5d-032a-4d4a-b58d-f52ee760dd7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455251, 'reachable_time': 39728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277801, 'error': None, 'target': 'ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:25:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:38.267 163408 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 05:25:38 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:25:38.267 163408 DEBUG oslo.privsep.daemon [-] privsep: reply[d4328039-96d7-4a62-83ce-8ebd7945b0a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 05:25:38 np0005540825 systemd[1]: run-netns-ovnmeta\x2d82ec8f83\x2d684f\x2d44ae\x2d8389\x2d122bf8ed45ab.mount: Deactivated successfully.
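With the last VIF gone from network 82ec8f83, the metadata agent tears down the ovnmeta- namespace; the remove_netns privsep call logged above reduces to a single pyroute2 primitive, which is why it runs inside the privsep daemon (it needs CAP_SYS_ADMIN). A minimal sketch, with the namespace name copied from the log:

    from pyroute2 import netns

    NS = "ovnmeta-82ec8f83-684f-44ae-8389-122bf8ed45ab"

    # Equivalent of neutron's privileged remove_netns(); needs CAP_SYS_ADMIN,
    # hence the round-trip through oslo.privsep seen in the DEBUG replies.
    if NS in netns.listnetns():
        netns.remove(NS)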
Dec  1 05:25:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 10 KiB/s wr, 29 op/s
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.343 256155 DEBUG nova.compute.manager [req-5c234607-da1b-4bb7-be6c-e2ff1b216d92 req-63d3ba5d-0c93-4f54-96af-8f5c7c659d53 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-vif-unplugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.344 256155 DEBUG oslo_concurrency.lockutils [req-5c234607-da1b-4bb7-be6c-e2ff1b216d92 req-63d3ba5d-0c93-4f54-96af-8f5c7c659d53 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.344 256155 DEBUG oslo_concurrency.lockutils [req-5c234607-da1b-4bb7-be6c-e2ff1b216d92 req-63d3ba5d-0c93-4f54-96af-8f5c7c659d53 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.344 256155 DEBUG oslo_concurrency.lockutils [req-5c234607-da1b-4bb7-be6c-e2ff1b216d92 req-63d3ba5d-0c93-4f54-96af-8f5c7c659d53 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.344 256155 DEBUG nova.compute.manager [req-5c234607-da1b-4bb7-be6c-e2ff1b216d92 req-63d3ba5d-0c93-4f54-96af-8f5c7c659d53 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] No waiting events found dispatching network-vif-unplugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.344 256155 DEBUG nova.compute.manager [req-5c234607-da1b-4bb7-be6c-e2ff1b216d92 req-63d3ba5d-0c93-4f54-96af-8f5c7c659d53 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-vif-unplugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.502 256155 DEBUG nova.compute.manager [req-bef9756d-080a-4df0-89a5-97e94c2b71dd req-46464fb9-8cad-49d2-8b90-7f91f3eb59a2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-changed-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.503 256155 DEBUG nova.compute.manager [req-bef9756d-080a-4df0-89a5-97e94c2b71dd req-46464fb9-8cad-49d2-8b90-7f91f3eb59a2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Refreshing instance network info cache due to event network-changed-80410344-d9b7-4cc9-a8bc-ee566d46d0e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.503 256155 DEBUG oslo_concurrency.lockutils [req-bef9756d-080a-4df0-89a5-97e94c2b71dd req-46464fb9-8cad-49d2-8b90-7f91f3eb59a2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.504 256155 DEBUG oslo_concurrency.lockutils [req-bef9756d-080a-4df0-89a5-97e94c2b71dd req-46464fb9-8cad-49d2-8b90-7f91f3eb59a2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquired lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 05:25:38 np0005540825 nova_compute[256151]: 2025-12-01 10:25:38.504 256155 DEBUG nova.network.neutron [req-bef9756d-080a-4df0-89a5-97e94c2b71dd req-46464fb9-8cad-49d2-8b90-7f91f3eb59a2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Refreshing network info cache for port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 05:25:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:38.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:38.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:25:39 np0005540825 nova_compute[256151]: 2025-12-01 10:25:39.157 256155 INFO nova.virt.libvirt.driver [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Deleting instance files /var/lib/nova/instances/dd56af67-ae91-4891-b152-ac9a0f325fc5_del#033[00m
Dec  1 05:25:39 np0005540825 nova_compute[256151]: 2025-12-01 10:25:39.157 256155 INFO nova.virt.libvirt.driver [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Deletion of /var/lib/nova/instances/dd56af67-ae91-4891-b152-ac9a0f325fc5_del complete#033[00m
Dec  1 05:25:39 np0005540825 nova_compute[256151]: 2025-12-01 10:25:39.236 256155 INFO nova.compute.manager [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Took 1.40 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 05:25:39 np0005540825 nova_compute[256151]: 2025-12-01 10:25:39.237 256155 DEBUG oslo.service.loopingcall [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 05:25:39 np0005540825 nova_compute[256151]: 2025-12-01 10:25:39.237 256155 DEBUG nova.compute.manager [-] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 05:25:39 np0005540825 nova_compute[256151]: 2025-12-01 10:25:39.238 256155 DEBUG nova.network.neutron [-] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 05:25:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:25:39
Dec  1 05:25:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:25:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:25:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.control', '.nfs', '.mgr', 'images', 'cephfs.cephfs.data', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log']
Dec  1 05:25:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:25:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:25:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:25:39 np0005540825 nova_compute[256151]: 2025-12-01 10:25:39.605 256155 DEBUG nova.network.neutron [req-bef9756d-080a-4df0-89a5-97e94c2b71dd req-46464fb9-8cad-49d2-8b90-7f91f3eb59a2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updated VIF entry in instance network info cache for port 80410344-d9b7-4cc9-a8bc-ee566d46d0e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 05:25:39 np0005540825 nova_compute[256151]: 2025-12-01 10:25:39.606 256155 DEBUG nova.network.neutron [req-bef9756d-080a-4df0-89a5-97e94c2b71dd req-46464fb9-8cad-49d2-8b90-7f91f3eb59a2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updating instance_info_cache with network_info: [{"id": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "address": "fa:16:3e:bd:ef:f0", "network": {"id": "82ec8f83-684f-44ae-8389-122bf8ed45ab", "bridge": "br-int", "label": "tempest-network-smoke--115101625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f6be4e572624210b91193c011607c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80410344-d9", "ovs_interfaceid": "80410344-d9b7-4cc9-a8bc-ee566d46d0e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:25:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:25:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:25:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:25:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:25:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:25:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:25:39 np0005540825 nova_compute[256151]: 2025-12-01 10:25:39.646 256155 DEBUG oslo_concurrency.lockutils [req-bef9756d-080a-4df0-89a5-97e94c2b71dd req-46464fb9-8cad-49d2-8b90-7f91f3eb59a2 dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Releasing lock "refresh_cache-dd56af67-ae91-4891-b152-ac9a0f325fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 05:25:39 np0005540825 nova_compute[256151]: 2025-12-01 10:25:39.966 256155 DEBUG nova.network.neutron [-] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 05:25:39 np0005540825 nova_compute[256151]: 2025-12-01 10:25:39.982 256155 INFO nova.compute.manager [-] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Took 0.74 seconds to deallocate network for instance.#033[00m
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.026 256155 DEBUG oslo_concurrency.lockutils [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.026 256155 DEBUG oslo_concurrency.lockutils [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.135 256155 DEBUG oslo_concurrency.processutils [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
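nova's RBD image backend sizes its disk inventory by shelling out to exactly the `ceph df --format=json` command logged here (it returns in 0.488s a few lines down). A minimal reader for that output; the key names match current ceph releases but are worth verifying against the version in use:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    df = json.loads(out)
    print("cluster avail bytes:", df["stats"]["total_avail_bytes"])
    for pool in df["pools"]:
        # max_avail is what effectively bounds free space per pool.
        print(pool["name"], pool["stats"]["max_avail"])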
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007601633212867096 of space, bias 1.0, pg target 0.22804899638601286 quantized to 32 (current 32)
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
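Each pg_autoscaler pair above is one formula: pg target = capacity ratio x bias x (target PGs per OSD x number of OSDs). Assuming the default mon_target_pg_per_osd of 100 and three OSDs in this cluster, the multiplier is 300, which reproduces the logged targets; the "quantized" step then rounds to a power of two subject to the module's per-pool minimum and rate limits, which is why sub-1 targets still land on values like 1, 16, or 32. A worked check against three of the logged pools (the 100 x 3 multiplier is an assumption that happens to match the numbers):

    # Ratios and biases copied from the pg_autoscaler lines above.
    TARGET_PG_PER_OSD = 100   # assumed default mon_target_pg_per_osd
    NUM_OSDS = 3              # assumed OSD count for this cluster

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    print(pg_target(7.185749983720779e-06, 1.0))  # .mgr  -> ~0.0021557
    print(pg_target(0.0007601633212867096, 1.0))  # vms   -> ~0.2280490
    print(pg_target(5.087256625643029e-07, 4.0))  # meta  -> ~0.0006105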
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:25:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 10 KiB/s wr, 29 op/s
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.437 256155 DEBUG nova.compute.manager [req-cc09d665-d9c1-4557-971e-e5c2450d1383 req-e727daa6-8baa-4ad0-a9af-db83d6cee1dd dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.438 256155 DEBUG oslo_concurrency.lockutils [req-cc09d665-d9c1-4557-971e-e5c2450d1383 req-e727daa6-8baa-4ad0-a9af-db83d6cee1dd dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Acquiring lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.439 256155 DEBUG oslo_concurrency.lockutils [req-cc09d665-d9c1-4557-971e-e5c2450d1383 req-e727daa6-8baa-4ad0-a9af-db83d6cee1dd dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.439 256155 DEBUG oslo_concurrency.lockutils [req-cc09d665-d9c1-4557-971e-e5c2450d1383 req-e727daa6-8baa-4ad0-a9af-db83d6cee1dd dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.439 256155 DEBUG nova.compute.manager [req-cc09d665-d9c1-4557-971e-e5c2450d1383 req-e727daa6-8baa-4ad0-a9af-db83d6cee1dd dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] No waiting events found dispatching network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.439 256155 WARNING nova.compute.manager [req-cc09d665-d9c1-4557-971e-e5c2450d1383 req-e727daa6-8baa-4ad0-a9af-db83d6cee1dd dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received unexpected event network-vif-plugged-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 05:25:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:40.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.577 256155 DEBUG nova.compute.manager [req-1c3d401a-b864-453a-90e7-0fa874a6a1f2 req-c7f17ba9-270b-469c-847f-ff01f564d53f dacba8d8330f4064ba77b4caeb0c4756 701c7475017845dbbaa4460b007ffc6f - - default default] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Received event network-vif-deleted-80410344-d9b7-4cc9-a8bc-ee566d46d0e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 05:25:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:40.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:25:40 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783100042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.623 256155 DEBUG oslo_concurrency.processutils [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:25:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.628 256155 DEBUG nova.compute.provider_tree [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.643 256155 DEBUG nova.scheduler.client.report [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
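Placement computes schedulable capacity as (total - reserved) x allocation_ratio, so the inventory above advertises 32 VCPUs, 7168 MB of RAM, and about 52 GB of disk, against which allocations such as this instance's were counted. A one-line check of that arithmetic:

    # Inventory copied from the log line above; the capacity rule is
    # placement's (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~52.2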
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.663 256155 DEBUG oslo_concurrency.lockutils [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.694 256155 INFO nova.scheduler.client.report [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Deleted allocations for instance dd56af67-ae91-4891-b152-ac9a0f325fc5#033[00m
Dec  1 05:25:40 np0005540825 nova_compute[256151]: 2025-12-01 10:25:40.760 256155 DEBUG oslo_concurrency.lockutils [None req-d4965f5c-5ce4-427c-903f-c8332754cb22 5b56a238daf0445798410e51caada0ff 9f6be4e572624210b91193c011607c08 - - default default] Lock "dd56af67-ae91-4891-b152-ac9a0f325fc5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:25:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:41] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:25:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:41] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:25:42 np0005540825 nova_compute[256151]: 2025-12-01 10:25:42.116 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 12 KiB/s wr, 57 op/s
Dec  1 05:25:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:42.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:42.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:43 np0005540825 nova_compute[256151]: 2025-12-01 10:25:43.141 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:43.712Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:25:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 9.3 KiB/s wr, 56 op/s
Dec  1 05:25:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:44.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:44.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:45 np0005540825 nova_compute[256151]: 2025-12-01 10:25:45.583 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:45 np0005540825 nova_compute[256151]: 2025-12-01 10:25:45.754 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 9.3 KiB/s wr, 56 op/s
Dec  1 05:25:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:46.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:46.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:47 np0005540825 nova_compute[256151]: 2025-12-01 10:25:47.118 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:47.268Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:25:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:47.269Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:48 np0005540825 nova_compute[256151]: 2025-12-01 10:25:48.145 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:25:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:48.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:48.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:48.848Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:25:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:48.848Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:25:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:48.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:25:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:25:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:25:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:50.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:50.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:51 np0005540825 podman[277863]: 2025-12-01 10:25:51.258404127 +0000 UTC m=+0.118197368 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 05:25:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:51] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:25:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:25:51] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Dec  1 05:25:52 np0005540825 nova_compute[256151]: 2025-12-01 10:25:52.120 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:25:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:52.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:52.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:53 np0005540825 nova_compute[256151]: 2025-12-01 10:25:53.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:25:53 np0005540825 nova_compute[256151]: 2025-12-01 10:25:53.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:25:53 np0005540825 nova_compute[256151]: 2025-12-01 10:25:53.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:25:53 np0005540825 nova_compute[256151]: 2025-12-01 10:25:53.050 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:25:53 np0005540825 nova_compute[256151]: 2025-12-01 10:25:53.069 256155 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764584738.0681658, dd56af67-ae91-4891-b152-ac9a0f325fc5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 05:25:53 np0005540825 nova_compute[256151]: 2025-12-01 10:25:53.069 256155 INFO nova.compute.manager [-] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] VM Stopped (Lifecycle Event)
Dec  1 05:25:53 np0005540825 nova_compute[256151]: 2025-12-01 10:25:53.088 256155 DEBUG nova.compute.manager [None req-13d728fb-d8b0-41f2-a415-cfd86e35fcb4 - - - - - -] [instance: dd56af67-ae91-4891-b152-ac9a0f325fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 05:25:53 np0005540825 nova_compute[256151]: 2025-12-01 10:25:53.147 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:53.713Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:25:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:25:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:25:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:25:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:25:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:25:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:54.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:25:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:25:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:56.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:56.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:57 np0005540825 nova_compute[256151]: 2025-12-01 10:25:57.045 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:25:57 np0005540825 nova_compute[256151]: 2025-12-01 10:25:57.123 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:57.270Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:25:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:57.270Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:25:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:57.270Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:25:58 np0005540825 nova_compute[256151]: 2025-12-01 10:25:58.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:25:58 np0005540825 nova_compute[256151]: 2025-12-01 10:25:58.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:25:58 np0005540825 nova_compute[256151]: 2025-12-01 10:25:58.150 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:25:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:25:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:25:58.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:25:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:25:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:25:58.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:25:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:25:58.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:25:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:25:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:25:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:25:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:25:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:00 np0005540825 nova_compute[256151]: 2025-12-01 10:26:00.024 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:26:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:26:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:00.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:00.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:01 np0005540825 nova_compute[256151]: 2025-12-01 10:26:01.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:26:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:01] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:26:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:01] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:26:02 np0005540825 nova_compute[256151]: 2025-12-01 10:26:02.168 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:26:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:26:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:02.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:26:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:02.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:03 np0005540825 nova_compute[256151]: 2025-12-01 10:26:03.153 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:03.714Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:26:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:03.714Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:26:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.048 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.049 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.049 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.049 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.050 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:26:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:26:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:26:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1239419402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.539 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:26:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:26:04.582 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:26:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:26:04.583 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:26:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:26:04.583 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:26:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:04.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:04.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:04 np0005540825 podman[277929]: 2025-12-01 10:26:04.702940249 +0000 UTC m=+0.102105534 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.786 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.787 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4577MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.787 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.788 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.849 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.850 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.869 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing inventories for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.904 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating ProviderTree inventory for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.905 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating inventory in ProviderTree for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.921 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing aggregate associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.945 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing trait associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SVM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  1 05:26:04 np0005540825 nova_compute[256151]: 2025-12-01 10:26:04.974 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:26:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:26:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3729794993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:26:05 np0005540825 nova_compute[256151]: 2025-12-01 10:26:05.428 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:26:05 np0005540825 nova_compute[256151]: 2025-12-01 10:26:05.437 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:26:05 np0005540825 nova_compute[256151]: 2025-12-01 10:26:05.453 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:26:05 np0005540825 nova_compute[256151]: 2025-12-01 10:26:05.477 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:26:05 np0005540825 nova_compute[256151]: 2025-12-01 10:26:05.478 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:26:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:26:06 np0005540825 nova_compute[256151]: 2025-12-01 10:26:06.479 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:26:06 np0005540825 nova_compute[256151]: 2025-12-01 10:26:06.480 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:26:06 np0005540825 nova_compute[256151]: 2025-12-01 10:26:06.480 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:26:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:06.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:06.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:07 np0005540825 nova_compute[256151]: 2025-12-01 10:26:07.169 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:07.271Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:08 np0005540825 nova_compute[256151]: 2025-12-01 10:26:08.155 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:08 np0005540825 podman[278000]: 2025-12-01 10:26:08.224606299 +0000 UTC m=+0.088828698 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec  1 05:26:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:26:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:08.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:08.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:08.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:26:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:08.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:09 np0005540825 nova_compute[256151]: 2025-12-01 10:26:09.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:26:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:26:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:26:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:26:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:26:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:26:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:26:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:26:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:26:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  1 05:26:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:10.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:10.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:11] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec  1 05:26:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:11] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec  1 05:26:11 np0005540825 podman[278151]: 2025-12-01 10:26:11.516778081 +0000 UTC m=+0.085019426 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 05:26:11 np0005540825 podman[278151]: 2025-12-01 10:26:11.62876176 +0000 UTC m=+0.197003085 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:26:12 np0005540825 nova_compute[256151]: 2025-12-01 10:26:12.172 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:12 np0005540825 podman[278274]: 2025-12-01 10:26:12.21301992 +0000 UTC m=+0.087821201 container exec 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:26:12 np0005540825 podman[278274]: 2025-12-01 10:26:12.222483334 +0000 UTC m=+0.097284565 container exec_died 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:26:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec  1 05:26:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:12.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:12.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:12 np0005540825 podman[278367]: 2025-12-01 10:26:12.662526288 +0000 UTC m=+0.075897360 container exec 7a97e5c792e90c0e9beef244d64f90b782f45501ef79e0290396630e04fbacec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 05:26:12 np0005540825 podman[278367]: 2025-12-01 10:26:12.683724498 +0000 UTC m=+0.097095540 container exec_died 7a97e5c792e90c0e9beef244d64f90b782f45501ef79e0290396630e04fbacec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:26:13 np0005540825 podman[278434]: 2025-12-01 10:26:13.025948764 +0000 UTC m=+0.096012781 container exec 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 05:26:13 np0005540825 podman[278434]: 2025-12-01 10:26:13.037974947 +0000 UTC m=+0.108038954 container exec_died 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 05:26:13 np0005540825 nova_compute[256151]: 2025-12-01 10:26:13.157 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:13 np0005540825 podman[278501]: 2025-12-01 10:26:13.375791184 +0000 UTC m=+0.080762851 container exec a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.28.2)
Dec  1 05:26:13 np0005540825 podman[278501]: 2025-12-01 10:26:13.399685676 +0000 UTC m=+0.104657313 container exec_died a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, description=keepalived for Ceph, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, name=keepalived)
Dec  1 05:26:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:13.715Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:13 np0005540825 podman[278572]: 2025-12-01 10:26:13.810818204 +0000 UTC m=+0.084878722 container exec fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:26:13 np0005540825 podman[278572]: 2025-12-01 10:26:13.841102718 +0000 UTC m=+0.115163206 container exec_died fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:26:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:14 np0005540825 podman[278647]: 2025-12-01 10:26:14.130338619 +0000 UTC m=+0.061769941 container exec 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 05:26:14 np0005540825 podman[278647]: 2025-12-01 10:26:14.337367622 +0000 UTC m=+0.268798974 container exec_died 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 05:26:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec  1 05:26:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:26:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:26:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:14.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:14 np0005540825 podman[278757]: 2025-12-01 10:26:14.804142054 +0000 UTC m=+0.067482704 container exec f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:26:14 np0005540825 podman[278757]: 2025-12-01 10:26:14.842724191 +0000 UTC m=+0.106064811 container exec_died f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:26:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:26:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:26:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:26:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 107 op/s
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:15 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:26:16 np0005540825 podman[278974]: 2025-12-01 10:26:16.312853415 +0000 UTC m=+0.056350235 container create 2fbdb72215e0306b36dae5faa8ba105ca5c1a085dca352ddf2b362e832146d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_merkle, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 05:26:16 np0005540825 systemd[1]: Started libpod-conmon-2fbdb72215e0306b36dae5faa8ba105ca5c1a085dca352ddf2b362e832146d40.scope.
Dec  1 05:26:16 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:26:16 np0005540825 podman[278974]: 2025-12-01 10:26:16.295585611 +0000 UTC m=+0.039082461 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:26:16 np0005540825 podman[278974]: 2025-12-01 10:26:16.405615078 +0000 UTC m=+0.149111988 container init 2fbdb72215e0306b36dae5faa8ba105ca5c1a085dca352ddf2b362e832146d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 05:26:16 np0005540825 podman[278974]: 2025-12-01 10:26:16.416585753 +0000 UTC m=+0.160082603 container start 2fbdb72215e0306b36dae5faa8ba105ca5c1a085dca352ddf2b362e832146d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_merkle, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:26:16 np0005540825 podman[278974]: 2025-12-01 10:26:16.420728174 +0000 UTC m=+0.164225074 container attach 2fbdb72215e0306b36dae5faa8ba105ca5c1a085dca352ddf2b362e832146d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_merkle, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 05:26:16 np0005540825 funny_merkle[278990]: 167 167
Dec  1 05:26:16 np0005540825 systemd[1]: libpod-2fbdb72215e0306b36dae5faa8ba105ca5c1a085dca352ddf2b362e832146d40.scope: Deactivated successfully.
Dec  1 05:26:16 np0005540825 podman[278974]: 2025-12-01 10:26:16.424814614 +0000 UTC m=+0.168311464 container died 2fbdb72215e0306b36dae5faa8ba105ca5c1a085dca352ddf2b362e832146d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 05:26:16 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7ec334070650649551949626c8cc2ff3dc918d642a87b66a6d773eceb8a5ea55-merged.mount: Deactivated successfully.
Dec  1 05:26:16 np0005540825 podman[278974]: 2025-12-01 10:26:16.473929824 +0000 UTC m=+0.217426684 container remove 2fbdb72215e0306b36dae5faa8ba105ca5c1a085dca352ddf2b362e832146d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:26:16 np0005540825 systemd[1]: libpod-conmon-2fbdb72215e0306b36dae5faa8ba105ca5c1a085dca352ddf2b362e832146d40.scope: Deactivated successfully.
Dec  1 05:26:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:16.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:16.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:16 np0005540825 podman[279014]: 2025-12-01 10:26:16.729989414 +0000 UTC m=+0.070954557 container create 15423fafdffc6a55570303da3b7fb1e794ce99211fc8ef76b07af718e47c7412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  1 05:26:16 np0005540825 systemd[1]: Started libpod-conmon-15423fafdffc6a55570303da3b7fb1e794ce99211fc8ef76b07af718e47c7412.scope.
Dec  1 05:26:16 np0005540825 podman[279014]: 2025-12-01 10:26:16.700153632 +0000 UTC m=+0.041118815 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:26:16 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:26:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bc568a4882a7c38c34bea03b6e0030c9555e7081b3eaf18a7ea6a0b10f4825/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bc568a4882a7c38c34bea03b6e0030c9555e7081b3eaf18a7ea6a0b10f4825/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bc568a4882a7c38c34bea03b6e0030c9555e7081b3eaf18a7ea6a0b10f4825/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bc568a4882a7c38c34bea03b6e0030c9555e7081b3eaf18a7ea6a0b10f4825/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:16 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bc568a4882a7c38c34bea03b6e0030c9555e7081b3eaf18a7ea6a0b10f4825/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:16 np0005540825 podman[279014]: 2025-12-01 10:26:16.833605839 +0000 UTC m=+0.174570962 container init 15423fafdffc6a55570303da3b7fb1e794ce99211fc8ef76b07af718e47c7412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:26:16 np0005540825 podman[279014]: 2025-12-01 10:26:16.845432376 +0000 UTC m=+0.186397479 container start 15423fafdffc6a55570303da3b7fb1e794ce99211fc8ef76b07af718e47c7412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec  1 05:26:16 np0005540825 podman[279014]: 2025-12-01 10:26:16.848846228 +0000 UTC m=+0.189811321 container attach 15423fafdffc6a55570303da3b7fb1e794ce99211fc8ef76b07af718e47c7412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:26:17 np0005540825 hungry_joliot[279030]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:26:17 np0005540825 hungry_joliot[279030]: --> All data devices are unavailable
Dec  1 05:26:17 np0005540825 nova_compute[256151]: 2025-12-01 10:26:17.230 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:17 np0005540825 systemd[1]: libpod-15423fafdffc6a55570303da3b7fb1e794ce99211fc8ef76b07af718e47c7412.scope: Deactivated successfully.
Dec  1 05:26:17 np0005540825 podman[279014]: 2025-12-01 10:26:17.239614118 +0000 UTC m=+0.580579221 container died 15423fafdffc6a55570303da3b7fb1e794ce99211fc8ef76b07af718e47c7412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:26:17 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d4bc568a4882a7c38c34bea03b6e0030c9555e7081b3eaf18a7ea6a0b10f4825-merged.mount: Deactivated successfully.
Dec  1 05:26:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:17.273Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:17 np0005540825 podman[279014]: 2025-12-01 10:26:17.280861767 +0000 UTC m=+0.621826880 container remove 15423fafdffc6a55570303da3b7fb1e794ce99211fc8ef76b07af718e47c7412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:26:17 np0005540825 systemd[1]: libpod-conmon-15423fafdffc6a55570303da3b7fb1e794ce99211fc8ef76b07af718e47c7412.scope: Deactivated successfully.
Dec  1 05:26:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Dec  1 05:26:17 np0005540825 podman[279151]: 2025-12-01 10:26:17.914484292 +0000 UTC m=+0.060711063 container create 02efcbb54aada63299000f7da337c273928008b21bb7e045129ca86830bf4adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:26:17 np0005540825 systemd[1]: Started libpod-conmon-02efcbb54aada63299000f7da337c273928008b21bb7e045129ca86830bf4adf.scope.
Dec  1 05:26:17 np0005540825 podman[279151]: 2025-12-01 10:26:17.893999181 +0000 UTC m=+0.040225992 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:26:17 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:26:18 np0005540825 podman[279151]: 2025-12-01 10:26:18.00297911 +0000 UTC m=+0.149205921 container init 02efcbb54aada63299000f7da337c273928008b21bb7e045129ca86830bf4adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 05:26:18 np0005540825 podman[279151]: 2025-12-01 10:26:18.01228476 +0000 UTC m=+0.158511531 container start 02efcbb54aada63299000f7da337c273928008b21bb7e045129ca86830bf4adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 05:26:18 np0005540825 podman[279151]: 2025-12-01 10:26:18.015483046 +0000 UTC m=+0.161709857 container attach 02efcbb54aada63299000f7da337c273928008b21bb7e045129ca86830bf4adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:26:18 np0005540825 jovial_morse[279168]: 167 167
Dec  1 05:26:18 np0005540825 systemd[1]: libpod-02efcbb54aada63299000f7da337c273928008b21bb7e045129ca86830bf4adf.scope: Deactivated successfully.
Dec  1 05:26:18 np0005540825 podman[279151]: 2025-12-01 10:26:18.017414238 +0000 UTC m=+0.163641039 container died 02efcbb54aada63299000f7da337c273928008b21bb7e045129ca86830bf4adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  1 05:26:18 np0005540825 systemd[1]: var-lib-containers-storage-overlay-26a68275383595340e932b301c8a9504c6e07a160b4ef7f274311341c0a656b6-merged.mount: Deactivated successfully.
Dec  1 05:26:18 np0005540825 podman[279151]: 2025-12-01 10:26:18.065992743 +0000 UTC m=+0.212219524 container remove 02efcbb54aada63299000f7da337c273928008b21bb7e045129ca86830bf4adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_morse, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:26:18 np0005540825 systemd[1]: libpod-conmon-02efcbb54aada63299000f7da337c273928008b21bb7e045129ca86830bf4adf.scope: Deactivated successfully.
Dec  1 05:26:18 np0005540825 nova_compute[256151]: 2025-12-01 10:26:18.159 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:18 np0005540825 podman[279193]: 2025-12-01 10:26:18.318989071 +0000 UTC m=+0.068270865 container create 0291dc8479b048a8c71496a4e1a2ab964d4f5ac6f5d64f6e9641261ab0281271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  1 05:26:18 np0005540825 systemd[1]: Started libpod-conmon-0291dc8479b048a8c71496a4e1a2ab964d4f5ac6f5d64f6e9641261ab0281271.scope.
Dec  1 05:26:18 np0005540825 podman[279193]: 2025-12-01 10:26:18.289351085 +0000 UTC m=+0.038632979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:26:18 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:26:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ddf18a383bb69f649107ba0cdb723e44fbd2c8d67fc25311815b0a7022ec30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ddf18a383bb69f649107ba0cdb723e44fbd2c8d67fc25311815b0a7022ec30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ddf18a383bb69f649107ba0cdb723e44fbd2c8d67fc25311815b0a7022ec30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:18 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ddf18a383bb69f649107ba0cdb723e44fbd2c8d67fc25311815b0a7022ec30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:18 np0005540825 podman[279193]: 2025-12-01 10:26:18.419130762 +0000 UTC m=+0.168412646 container init 0291dc8479b048a8c71496a4e1a2ab964d4f5ac6f5d64f6e9641261ab0281271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Dec  1 05:26:18 np0005540825 podman[279193]: 2025-12-01 10:26:18.438059191 +0000 UTC m=+0.187341025 container start 0291dc8479b048a8c71496a4e1a2ab964d4f5ac6f5d64f6e9641261ab0281271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:26:18 np0005540825 podman[279193]: 2025-12-01 10:26:18.445538282 +0000 UTC m=+0.194820116 container attach 0291dc8479b048a8c71496a4e1a2ab964d4f5ac6f5d64f6e9641261ab0281271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 05:26:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:18.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:18.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:18 np0005540825 keen_lewin[279210]: {
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:    "1": [
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:        {
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            "devices": [
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "/dev/loop3"
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            ],
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            "lv_name": "ceph_lv0",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            "lv_size": "21470642176",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            "name": "ceph_lv0",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            "tags": {
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.cluster_name": "ceph",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.crush_device_class": "",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.encrypted": "0",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.osd_id": "1",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.type": "block",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.vdo": "0",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:                "ceph.with_tpm": "0"
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            },
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            "type": "block",
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:            "vg_name": "ceph_vg0"
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:        }
Dec  1 05:26:18 np0005540825 keen_lewin[279210]:    ]
Dec  1 05:26:18 np0005540825 keen_lewin[279210]: }
Dec  1 05:26:18 np0005540825 systemd[1]: libpod-0291dc8479b048a8c71496a4e1a2ab964d4f5ac6f5d64f6e9641261ab0281271.scope: Deactivated successfully.
Dec  1 05:26:18 np0005540825 podman[279193]: 2025-12-01 10:26:18.79156855 +0000 UTC m=+0.540850424 container died 0291dc8479b048a8c71496a4e1a2ab964d4f5ac6f5d64f6e9641261ab0281271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 05:26:18 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c7ddf18a383bb69f649107ba0cdb723e44fbd2c8d67fc25311815b0a7022ec30-merged.mount: Deactivated successfully.
Dec  1 05:26:18 np0005540825 podman[279193]: 2025-12-01 10:26:18.844831281 +0000 UTC m=+0.594113115 container remove 0291dc8479b048a8c71496a4e1a2ab964d4f5ac6f5d64f6e9641261ab0281271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:26:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:18.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:26:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:18.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:26:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:18.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:26:18 np0005540825 systemd[1]: libpod-conmon-0291dc8479b048a8c71496a4e1a2ab964d4f5ac6f5d64f6e9641261ab0281271.scope: Deactivated successfully.
Dec  1 05:26:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:19 np0005540825 podman[279321]: 2025-12-01 10:26:19.505666058 +0000 UTC m=+0.058070691 container create 3862ba26eb8ace4e2f31e4df913b46685c71b70284a91d5167b3f699f6847b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_cori, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:26:19 np0005540825 systemd[1]: Started libpod-conmon-3862ba26eb8ace4e2f31e4df913b46685c71b70284a91d5167b3f699f6847b08.scope.
Dec  1 05:26:19 np0005540825 podman[279321]: 2025-12-01 10:26:19.475982941 +0000 UTC m=+0.028387614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:26:19 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:26:19 np0005540825 podman[279321]: 2025-12-01 10:26:19.609823967 +0000 UTC m=+0.162228640 container init 3862ba26eb8ace4e2f31e4df913b46685c71b70284a91d5167b3f699f6847b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 05:26:19 np0005540825 podman[279321]: 2025-12-01 10:26:19.619205049 +0000 UTC m=+0.171609682 container start 3862ba26eb8ace4e2f31e4df913b46685c71b70284a91d5167b3f699f6847b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_cori, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  1 05:26:19 np0005540825 podman[279321]: 2025-12-01 10:26:19.62481092 +0000 UTC m=+0.177215593 container attach 3862ba26eb8ace4e2f31e4df913b46685c71b70284a91d5167b3f699f6847b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_cori, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  1 05:26:19 np0005540825 compassionate_cori[279337]: 167 167
Dec  1 05:26:19 np0005540825 systemd[1]: libpod-3862ba26eb8ace4e2f31e4df913b46685c71b70284a91d5167b3f699f6847b08.scope: Deactivated successfully.
Dec  1 05:26:19 np0005540825 conmon[279337]: conmon 3862ba26eb8ace4e2f31 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3862ba26eb8ace4e2f31e4df913b46685c71b70284a91d5167b3f699f6847b08.scope/container/memory.events
Dec  1 05:26:19 np0005540825 podman[279321]: 2025-12-01 10:26:19.628842178 +0000 UTC m=+0.181246841 container died 3862ba26eb8ace4e2f31e4df913b46685c71b70284a91d5167b3f699f6847b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_cori, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  1 05:26:19 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a7d09280e2d4d3b564a42da2724acd674acacf95c57ade9cb230412aff170ffd-merged.mount: Deactivated successfully.
Dec  1 05:26:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Dec  1 05:26:19 np0005540825 podman[279321]: 2025-12-01 10:26:19.677501896 +0000 UTC m=+0.229906499 container remove 3862ba26eb8ace4e2f31e4df913b46685c71b70284a91d5167b3f699f6847b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_cori, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 05:26:19 np0005540825 systemd[1]: libpod-conmon-3862ba26eb8ace4e2f31e4df913b46685c71b70284a91d5167b3f699f6847b08.scope: Deactivated successfully.
Dec  1 05:26:19 np0005540825 podman[279363]: 2025-12-01 10:26:19.84512619 +0000 UTC m=+0.028365483 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:26:20 np0005540825 podman[279363]: 2025-12-01 10:26:20.020955275 +0000 UTC m=+0.204194518 container create 97091bdd3c6679d4610737355b272c8cfc088d430794d77ee4f8faa05e6d8e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hopper, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:26:20 np0005540825 systemd[1]: Started libpod-conmon-97091bdd3c6679d4610737355b272c8cfc088d430794d77ee4f8faa05e6d8e46.scope.
Dec  1 05:26:20 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:26:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc7ea1c0060859ea520fde6b3b2ecfbfc85456468effc067fd0ddede3d3123c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc7ea1c0060859ea520fde6b3b2ecfbfc85456468effc067fd0ddede3d3123c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc7ea1c0060859ea520fde6b3b2ecfbfc85456468effc067fd0ddede3d3123c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:20 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc7ea1c0060859ea520fde6b3b2ecfbfc85456468effc067fd0ddede3d3123c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:26:20 np0005540825 podman[279363]: 2025-12-01 10:26:20.192881835 +0000 UTC m=+0.376121138 container init 97091bdd3c6679d4610737355b272c8cfc088d430794d77ee4f8faa05e6d8e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hopper, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  1 05:26:20 np0005540825 podman[279363]: 2025-12-01 10:26:20.208432402 +0000 UTC m=+0.391671655 container start 97091bdd3c6679d4610737355b272c8cfc088d430794d77ee4f8faa05e6d8e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hopper, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  1 05:26:20 np0005540825 podman[279363]: 2025-12-01 10:26:20.212100181 +0000 UTC m=+0.395339394 container attach 97091bdd3c6679d4610737355b272c8cfc088d430794d77ee4f8faa05e6d8e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hopper, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:26:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:26:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:20.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:26:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:20.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:20 np0005540825 lvm[279453]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:26:20 np0005540825 lvm[279453]: VG ceph_vg0 finished
Dec  1 05:26:21 np0005540825 sweet_hopper[279379]: {}
Dec  1 05:26:21 np0005540825 systemd[1]: libpod-97091bdd3c6679d4610737355b272c8cfc088d430794d77ee4f8faa05e6d8e46.scope: Deactivated successfully.
Dec  1 05:26:21 np0005540825 podman[279363]: 2025-12-01 10:26:21.067032744 +0000 UTC m=+1.250271997 container died 97091bdd3c6679d4610737355b272c8cfc088d430794d77ee4f8faa05e6d8e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  1 05:26:21 np0005540825 systemd[1]: libpod-97091bdd3c6679d4610737355b272c8cfc088d430794d77ee4f8faa05e6d8e46.scope: Consumed 1.535s CPU time.
Dec  1 05:26:21 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2bc7ea1c0060859ea520fde6b3b2ecfbfc85456468effc067fd0ddede3d3123c-merged.mount: Deactivated successfully.
Dec  1 05:26:21 np0005540825 podman[279363]: 2025-12-01 10:26:21.125778342 +0000 UTC m=+1.309017595 container remove 97091bdd3c6679d4610737355b272c8cfc088d430794d77ee4f8faa05e6d8e46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 05:26:21 np0005540825 systemd[1]: libpod-conmon-97091bdd3c6679d4610737355b272c8cfc088d430794d77ee4f8faa05e6d8e46.scope: Deactivated successfully.
Dec  1 05:26:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:26:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:26:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:21] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec  1 05:26:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:21] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Dec  1 05:26:21 np0005540825 podman[279493]: 2025-12-01 10:26:21.482271381 +0000 UTC m=+0.136716395 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec  1 05:26:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Dec  1 05:26:22 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:22 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:26:22 np0005540825 nova_compute[256151]: 2025-12-01 10:26:22.280 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:22.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:22.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:23 np0005540825 nova_compute[256151]: 2025-12-01 10:26:23.163 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 68 op/s
Dec  1 05:26:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:23.717Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:26:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:26:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:24.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:24.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:25 np0005540825 ovn_controller[153404]: 2025-12-01T10:26:25Z|00089|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec  1 05:26:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 135 op/s
Dec  1 05:26:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:26.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:26.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:27.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:26:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:27.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:26:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:27.275Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:27 np0005540825 nova_compute[256151]: 2025-12-01 10:26:27.315 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.494163) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584787494199, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2130, "num_deletes": 251, "total_data_size": 4180117, "memory_usage": 4249416, "flush_reason": "Manual Compaction"}
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584787517150, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4044953, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29623, "largest_seqno": 31752, "table_properties": {"data_size": 4035372, "index_size": 6011, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20031, "raw_average_key_size": 20, "raw_value_size": 4016184, "raw_average_value_size": 4110, "num_data_blocks": 258, "num_entries": 977, "num_filter_entries": 977, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764584581, "oldest_key_time": 1764584581, "file_creation_time": 1764584787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 23063 microseconds, and 8616 cpu microseconds.
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.517218) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4044953 bytes OK
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.517244) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.519086) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.519107) EVENT_LOG_v1 {"time_micros": 1764584787519100, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.519128) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4171484, prev total WAL file size 4171484, number of live WAL files 2.
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.521006) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3950KB)], [65(12MB)]
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584787521059, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16801381, "oldest_snapshot_seqno": -1}
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6270 keys, 14621890 bytes, temperature: kUnknown
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584787592119, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14621890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14580368, "index_size": 24772, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 160490, "raw_average_key_size": 25, "raw_value_size": 14467765, "raw_average_value_size": 2307, "num_data_blocks": 995, "num_entries": 6270, "num_filter_entries": 6270, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764584787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.592568) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14621890 bytes
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.594325) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.5 rd, 205.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.2 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 6791, records dropped: 521 output_compression: NoCompression
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.594347) EVENT_LOG_v1 {"time_micros": 1764584787594337, "job": 36, "event": "compaction_finished", "compaction_time_micros": 71335, "compaction_time_cpu_micros": 28182, "output_level": 6, "num_output_files": 1, "total_output_size": 14621890, "num_input_records": 6791, "num_output_records": 6270, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584787595756, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584787599380, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.520882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.599625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.599634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.599637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.599640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:26:27 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:26:27.599643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:26:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:26:28 np0005540825 nova_compute[256151]: 2025-12-01 10:26:28.165 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:28.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:28.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:28.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:26:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:30.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:30.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:31] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec  1 05:26:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:31] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Dec  1 05:26:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:26:32 np0005540825 nova_compute[256151]: 2025-12-01 10:26:32.370 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:32.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:32.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:33 np0005540825 nova_compute[256151]: 2025-12-01 10:26:33.168 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  1 05:26:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:33.718Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:34.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:34.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:34 np0005540825 nova_compute[256151]: 2025-12-01 10:26:34.844 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:34 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:26:34.844 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 05:26:34 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:26:34.846 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 05:26:35 np0005540825 podman[279560]: 2025-12-01 10:26:35.215656507 +0000 UTC m=+0.070238908 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:26:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec  1 05:26:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:36.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:36.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:37.277Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:37 np0005540825 nova_compute[256151]: 2025-12-01 10:26:37.432 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec  1 05:26:38 np0005540825 nova_compute[256151]: 2025-12-01 10:26:38.170 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:38.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:38.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:38.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:39 np0005540825 podman[279583]: 2025-12-01 10:26:39.21351633 +0000 UTC m=+0.075409427 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:26:39
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['.nfs', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'backups', '.rgw.root', 'default.rgw.log', 'images', 'vms']
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:26:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:26:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:26:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007596545956241453 of space, bias 1.0, pg target 0.22789637868724358 quantized to 32 (current 32)
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
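
Each "pg target" above is the pool's usage fraction times its bias times a cluster-wide PG budget. Assuming 3 OSDs and Ceph's default mon_target_pg_per_osd of 100 (together the x300 factor that reproduces every logged value), a rough sketch of the arithmetic; the real pg_autoscaler also rounds to a power of two, respects per-pool pg_num_min (visible in the different floors of 1, 16, and 32 above), and only acts when current and target differ by roughly 3x:

    # Sketch of the pg_autoscaler arithmetic seen in the lines above.
    # NUM_OSDS and TARGET_PG_PER_OSD are assumptions (3 OSDs, the default 100).
    NUM_OSDS = 3
    TARGET_PG_PER_OSD = 100

    def pg_target(usage_fraction: float, bias: float) -> float:
        return usage_fraction * bias * NUM_OSDS * TARGET_PG_PER_OSD

    def quantize(target: float, pg_num_min: int = 1) -> int:
        """Round up to a power of two, never below the pool's minimum."""
        pgs = max(round(target), pg_num_min, 1)
        return 1 << (pgs - 1).bit_length()

    print(pg_target(0.0007596545956241453, 1.0))  # ~0.22789..., as logged for 'vms'
    print(pg_target(7.185749983720779e-06, 1.0))  # ~0.00215..., as logged for '.mgr'
    print(quantize(0.0021557249951162337))        # 1, matching '.mgr'
    print(quantize(0.0006104707950771635, 16))    # 16, matching 'cephfs.cephfs.meta'
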
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:26:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:26:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:40.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:40.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:41] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:26:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:41] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:26:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 29 op/s
Dec  1 05:26:42 np0005540825 nova_compute[256151]: 2025-12-01 10:26:42.482 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:42.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:43 np0005540825 nova_compute[256151]: 2025-12-01 10:26:43.173 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Dec  1 05:26:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:43.719Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:43 np0005540825 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  1 05:26:43 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:26:43.848 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 05:26:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:44.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:44.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Dec  1 05:26:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:47.279Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:47 np0005540825 nova_compute[256151]: 2025-12-01 10:26:47.484 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:26:48 np0005540825 nova_compute[256151]: 2025-12-01 10:26:48.175 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:48.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:48.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:26:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:48.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
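
The alertmanager dispatcher has been failing against both receivers for this whole window, and the warn line above shows the underlying cause for webhook[1] is a TCP connect timeout to 192.168.122.101:8443, not a slow response. A hypothetical reachability probe to reproduce it from the alertmanager host (the script and its 5-second timeout are illustrative, mirroring the logged deadline behaviour):

    import urllib.request

    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        req = urllib.request.Request(url, data=b"{}", method="POST",
                                     headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, "->", resp.status)
        except OSError as exc:  # URLError/timeout, like the "dial tcp ... i/o timeout" logged
            print(url, "unreachable:", exc)
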
Dec  1 05:26:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:26:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:26:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:50.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:26:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:50.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:51] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:26:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:26:51] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Dec  1 05:26:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  1 05:26:52 np0005540825 podman[279644]: 2025-12-01 10:26:52.287811383 +0000 UTC m=+0.153452664 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:26:52 np0005540825 nova_compute[256151]: 2025-12-01 10:26:52.488 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:52.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:52.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:26:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.0 total, 600.0 interval
Cumulative writes: 7168 writes, 31K keys, 7168 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
Cumulative WAL: 7168 writes, 7168 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1596 writes, 6759 keys, 1596 commit groups, 1.0 writes per commit group, ingest: 11.75 MB, 0.02 MB/s
Interval WAL: 1596 writes, 1596 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     50.4      1.00              0.18        18    0.055       0      0       0.0       0.0
  L6      1/0   13.94 MB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   4.5     89.6     77.1      2.90              0.72        17    0.171     94K   9378       0.0       0.0
 Sum      1/0   13.94 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   5.5     66.7     70.3      3.90              0.90        35    0.111     94K   9378       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.9    128.7    131.2      0.52              0.20         8    0.065     26K   2583       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   0.0     89.6     77.1      2.90              0.72        17    0.171     94K   9378       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     50.6      0.99              0.18        17    0.058       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 2400.0 total, 600.0 interval
Flush(GB): cumulative 0.049, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.27 GB write, 0.11 MB/s write, 0.25 GB read, 0.11 MB/s read, 3.9 seconds
Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563970129350#2 capacity: 304.00 MB usage: 21.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000175 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1175,21.02 MB,6.91548%) FilterBlock(36,275.55 KB,0.088516%) IndexBlock(36,476.83 KB,0.153175%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
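
A cross-check on the numbers: the mon's kv_alloc from the recurring _set_new_cache_sizes lines is exactly the BinnedLRUCache capacity reported above, consistent with the monitor sizing its RocksDB block cache from the kv allocation (a reading of these figures, not a documented guarantee):

    # kv_alloc (bytes) from the ceph-mon cache-sizing lines vs. the RocksDB
    # block-cache capacity above: 318767104 / 2**20 == 304.0 MiB exactly.
    print(318767104 / 2**20)  # -> 304.0, matching "capacity: 304.00 MB"
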
Dec  1 05:26:53 np0005540825 nova_compute[256151]: 2025-12-01 10:26:53.178 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:26:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:53.720Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:26:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:26:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:54.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:26:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:54.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:26:55 np0005540825 nova_compute[256151]: 2025-12-01 10:26:55.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:26:55 np0005540825 nova_compute[256151]: 2025-12-01 10:26:55.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:26:55 np0005540825 nova_compute[256151]: 2025-12-01 10:26:55.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:26:55 np0005540825 nova_compute[256151]: 2025-12-01 10:26:55.045 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:26:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:26:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:26:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:56.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:56.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:57 np0005540825 nova_compute[256151]: 2025-12-01 10:26:57.041 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:26:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:57.281Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:57 np0005540825 nova_compute[256151]: 2025-12-01 10:26:57.489 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:26:58 np0005540825 nova_compute[256151]: 2025-12-01 10:26:58.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:26:58 np0005540825 nova_compute[256151]: 2025-12-01 10:26:58.218 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:26:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:26:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:26:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:26:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:26:58.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:26:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:26:58.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:26:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:26:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:26:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:26:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:26:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:26:59 np0005540825 nova_compute[256151]: 2025-12-01 10:26:59.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:26:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:27:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:00.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:00.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:01 np0005540825 nova_compute[256151]: 2025-12-01 10:27:01.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:27:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:01] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:27:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:01] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:27:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:27:02 np0005540825 nova_compute[256151]: 2025-12-01 10:27:02.545 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:02.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:27:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:02.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:27:03 np0005540825 nova_compute[256151]: 2025-12-01 10:27:03.221 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:03.721Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:27:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:27:04.584 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:27:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:27:04.585 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:27:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:27:04.585 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:27:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:04.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:04.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:05 np0005540825 nova_compute[256151]: 2025-12-01 10:27:05.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:27:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:27:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.052 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.052 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.052 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.052 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.053 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:27:06 np0005540825 podman[279686]: 2025-12-01 10:27:06.211279235 +0000 UTC m=+0.070667807 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:27:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:27:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971070539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.527 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:27:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:06.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.724 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.725 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4595MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.726 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.726 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
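[annotation] The acquire/release pair around "compute_resources" is oslo.concurrency's named-lock machinery serializing the resource-tracker update. A sketch of how such a section is guarded; the decorator is real oslo API, the body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # refresh the hypervisor view, then report inventory to placement;
        # this pass holds the lock for 0.648s per the release line below
        ...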
Dec  1 05:27:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:06.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.798 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.798 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:27:06 np0005540825 nova_compute[256151]: 2025-12-01 10:27:06.816 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:27:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:07.282Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
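[annotation] Both dashboard webhook receivers (compute-1 and compute-2, port 8443) are timing out, so Alertmanager cancels delivery after two retries. A 2-second TCP probe of each target separates a closed or filtered port from a slow service:

    import socket

    for host in ('compute-1.ctlplane.example.com',
                 'compute-2.ctlplane.example.com'):
        try:
            socket.create_connection((host, 8443), timeout=2).close()
            print(host, 'reachable')
        except OSError as exc:
            print(host, 'unreachable:', exc)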
Dec  1 05:27:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:27:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/109927610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:27:07 np0005540825 nova_compute[256151]: 2025-12-01 10:27:07.332 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:27:07 np0005540825 nova_compute[256151]: 2025-12-01 10:27:07.338 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:27:07 np0005540825 nova_compute[256151]: 2025-12-01 10:27:07.372 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
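[annotation] The inventory above determines what placement will schedule: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked from the logged figures:

    inventory = {
        'VCPU':      (8,    0,   4.0),
        'MEMORY_MB': (7680, 512, 1.0),
        'DISK_GB':   (59,   1,   0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2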
Dec  1 05:27:07 np0005540825 nova_compute[256151]: 2025-12-01 10:27:07.373 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:27:07 np0005540825 nova_compute[256151]: 2025-12-01 10:27:07.374 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:27:07 np0005540825 nova_compute[256151]: 2025-12-01 10:27:07.545 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:08 np0005540825 nova_compute[256151]: 2025-12-01 10:27:08.223 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:27:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:08.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:27:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:08.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:08.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
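[annotation] The ganesha.nfsd instance keeps re-entering a 90-second grace window, finds no client recovery records (clid count(0)), and rados_cluster_grace_enforcing returns -45, so the grace period is not lifted cluster-wide. The shared grace database lives in a RADOS object and can be inspected with the ganesha-rados-grace tool; the pool and namespace below are assumptions based on the usual cephadm NFS layout, not values from the log:

    import subprocess

    # dump the cluster-wide grace/enforcement flags per ganesha node
    subprocess.run(['ganesha-rados-grace', '--pool', '.nfs',
                    '--ns', 'cephfs', 'dump'], check=True)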
Dec  1 05:27:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:27:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:27:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:27:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:27:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:27:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:27:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:27:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:27:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:10 np0005540825 podman[279778]: 2025-12-01 10:27:10.205359153 +0000 UTC m=+0.074517190 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
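[annotation] The podman health_status event above comes from the container's configured probe (healthcheck test '/openstack/healthcheck' in config_data). The same probe can be triggered on demand; "multipathd" is the container_name from the event:

    import subprocess

    # exit status 0 means healthy, matching health_status=healthy above
    subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'], check=False)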
Dec  1 05:27:10 np0005540825 nova_compute[256151]: 2025-12-01 10:27:10.374 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:27:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
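[annotation] The monitor's cache autotuner figures above, restated in binary units:

    print(1020054731 / 2**30)   # cache_size ~= 0.95 GiB
    print(348127232 // 2**20)   # inc_alloc = full_alloc = 332 MiB
    print(318767104 // 2**20)   # kv_alloc = 304 MiB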
Dec  1 05:27:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:10.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:27:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:27:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:11] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:27:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:11] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
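[annotation] Prometheus (2.51.0) scrapes the ceph-mgr prometheus module every ten seconds and receives a 48538-byte payload. Fetching the same endpoint by hand; the module's default port 9283 is an assumption, the path comes from the access log:

    import urllib.request

    with urllib.request.urlopen('http://192.168.122.100:9283/metrics',
                                timeout=5) as resp:
        body = resp.read()
    print(resp.status, len(body))  # the log shows 200 / 48538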
Dec  1 05:27:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:27:12 np0005540825 nova_compute[256151]: 2025-12-01 10:27:12.547 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:12.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:12.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:13 np0005540825 nova_compute[256151]: 2025-12-01 10:27:13.225 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:13.721Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:27:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:14.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:14.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:27:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:27:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:16.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:16.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:17.284Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:17 np0005540825 nova_compute[256151]: 2025-12-01 10:27:17.585 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:18 np0005540825 nova_compute[256151]: 2025-12-01 10:27:18.228 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:18.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:18.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:18.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:27:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:27:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:20.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:20.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:21] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:27:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:21] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:27:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:27:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 575 B/s rd, 0 op/s
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:27:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
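[annotation] This burst of mon_commands (config generate-minimal-conf, auth get, config-key set of mgr/cephadm state, osd tree filtered to destroyed) is the cephadm mgr module persisting its state during a periodic refresh. The read-only commands have direct CLI equivalents; a sketch assuming admin keyring access on the host:

    import subprocess

    for cmd in (['ceph', 'config', 'generate-minimal-conf'],
                ['ceph', 'osd', 'tree', 'destroyed', '--format', 'json']):
        subprocess.run(cmd, check=True)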
Dec  1 05:27:22 np0005540825 nova_compute[256151]: 2025-12-01 10:27:22.588 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:22 np0005540825 podman[279916]: 2025-12-01 10:27:22.632112654 +0000 UTC m=+0.131169763 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  1 05:27:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:22.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:22.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:23 np0005540825 podman[280009]: 2025-12-01 10:27:23.059211703 +0000 UTC m=+0.059467750 container create e35999c892a2bd5f3ebdd6a802c9f9cad8a62b1aa88f5358cb15507e7b0fa893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wing, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  1 05:27:23 np0005540825 systemd[1]: Started libpod-conmon-e35999c892a2bd5f3ebdd6a802c9f9cad8a62b1aa88f5358cb15507e7b0fa893.scope.
Dec  1 05:27:23 np0005540825 podman[280009]: 2025-12-01 10:27:23.038681888 +0000 UTC m=+0.038938025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:27:23 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:27:23 np0005540825 podman[280009]: 2025-12-01 10:27:23.167168949 +0000 UTC m=+0.167425026 container init e35999c892a2bd5f3ebdd6a802c9f9cad8a62b1aa88f5358cb15507e7b0fa893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 05:27:23 np0005540825 podman[280009]: 2025-12-01 10:27:23.179043585 +0000 UTC m=+0.179299642 container start e35999c892a2bd5f3ebdd6a802c9f9cad8a62b1aa88f5358cb15507e7b0fa893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wing, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:27:23 np0005540825 podman[280009]: 2025-12-01 10:27:23.183194425 +0000 UTC m=+0.183450542 container attach e35999c892a2bd5f3ebdd6a802c9f9cad8a62b1aa88f5358cb15507e7b0fa893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:27:23 np0005540825 quirky_wing[280026]: 167 167
Dec  1 05:27:23 np0005540825 systemd[1]: libpod-e35999c892a2bd5f3ebdd6a802c9f9cad8a62b1aa88f5358cb15507e7b0fa893.scope: Deactivated successfully.
Dec  1 05:27:23 np0005540825 podman[280009]: 2025-12-01 10:27:23.188082025 +0000 UTC m=+0.188338112 container died e35999c892a2bd5f3ebdd6a802c9f9cad8a62b1aa88f5358cb15507e7b0fa893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wing, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  1 05:27:23 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b72ed44da2e0d5ebd7992c2c77c9b30846ff945b9747229e76f78f9e9cd6e3ce-merged.mount: Deactivated successfully.
Dec  1 05:27:23 np0005540825 nova_compute[256151]: 2025-12-01 10:27:23.231 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:23 np0005540825 podman[280009]: 2025-12-01 10:27:23.242825478 +0000 UTC m=+0.243081565 container remove e35999c892a2bd5f3ebdd6a802c9f9cad8a62b1aa88f5358cb15507e7b0fa893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:27:23 np0005540825 systemd[1]: libpod-conmon-e35999c892a2bd5f3ebdd6a802c9f9cad8a62b1aa88f5358cb15507e7b0fa893.scope: Deactivated successfully.
Dec  1 05:27:23 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:27:23 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:27:23 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:27:23 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:27:23 np0005540825 podman[280049]: 2025-12-01 10:27:23.521067465 +0000 UTC m=+0.114790859 container create 473403d35b82aed1901fff8bf2d07545104113522c7af21771da9a6e79f522ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_germain, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:27:23 np0005540825 podman[280049]: 2025-12-01 10:27:23.449012122 +0000 UTC m=+0.042735586 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:27:23 np0005540825 systemd[1]: Started libpod-conmon-473403d35b82aed1901fff8bf2d07545104113522c7af21771da9a6e79f522ee.scope.
Dec  1 05:27:23 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:27:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad15ace9eecf192a2779e0b5acf62627101f656f440a15153d10feddabd4658a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad15ace9eecf192a2779e0b5acf62627101f656f440a15153d10feddabd4658a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad15ace9eecf192a2779e0b5acf62627101f656f440a15153d10feddabd4658a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad15ace9eecf192a2779e0b5acf62627101f656f440a15153d10feddabd4658a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad15ace9eecf192a2779e0b5acf62627101f656f440a15153d10feddabd4658a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:23 np0005540825 podman[280049]: 2025-12-01 10:27:23.620237228 +0000 UTC m=+0.213960662 container init 473403d35b82aed1901fff8bf2d07545104113522c7af21771da9a6e79f522ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_germain, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:27:23 np0005540825 podman[280049]: 2025-12-01 10:27:23.632108673 +0000 UTC m=+0.225832097 container start 473403d35b82aed1901fff8bf2d07545104113522c7af21771da9a6e79f522ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_germain, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 05:27:23 np0005540825 podman[280049]: 2025-12-01 10:27:23.635642957 +0000 UTC m=+0.229366381 container attach 473403d35b82aed1901fff8bf2d07545104113522c7af21771da9a6e79f522ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_germain, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  1 05:27:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:23.722Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:23 np0005540825 lucid_germain[280065]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:27:23 np0005540825 lucid_germain[280065]: --> All data devices are unavailable
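[annotation] The short-lived quirky_wing/lucid_germain containers are cephadm invoking ceph-volume to scan for deployable disks; "passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" means the only candidate is an already-consumed LVM device, so no new OSD is prepared. A comparable dry-run report; the device path is illustrative (the JSON output further below shows /dev/loop3 backing OSD 1):

    import subprocess

    subprocess.run(['ceph-volume', 'lvm', 'batch', '--report',
                    '--format', 'json', '/dev/loop3'], check=False)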
Dec  1 05:27:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:27:24 np0005540825 systemd[1]: libpod-473403d35b82aed1901fff8bf2d07545104113522c7af21771da9a6e79f522ee.scope: Deactivated successfully.
Dec  1 05:27:24 np0005540825 podman[280049]: 2025-12-01 10:27:24.035123572 +0000 UTC m=+0.628847026 container died 473403d35b82aed1901fff8bf2d07545104113522c7af21771da9a6e79f522ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  1 05:27:24 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ad15ace9eecf192a2779e0b5acf62627101f656f440a15153d10feddabd4658a-merged.mount: Deactivated successfully.
Dec  1 05:27:24 np0005540825 podman[280049]: 2025-12-01 10:27:24.089386493 +0000 UTC m=+0.683109907 container remove 473403d35b82aed1901fff8bf2d07545104113522c7af21771da9a6e79f522ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 05:27:24 np0005540825 systemd[1]: libpod-conmon-473403d35b82aed1901fff8bf2d07545104113522c7af21771da9a6e79f522ee.scope: Deactivated successfully.
Dec  1 05:27:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 576 B/s rd, 0 op/s
Dec  1 05:27:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:27:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:27:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:24.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:24.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:24 np0005540825 podman[280184]: 2025-12-01 10:27:24.905920151 +0000 UTC m=+0.065807978 container create 87bd397f3d82896b36c2abf9aa0f8172ccc524b885f69c4381df552d4e6cfc8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:27:24 np0005540825 systemd[1]: Started libpod-conmon-87bd397f3d82896b36c2abf9aa0f8172ccc524b885f69c4381df552d4e6cfc8f.scope.
Dec  1 05:27:24 np0005540825 podman[280184]: 2025-12-01 10:27:24.879935921 +0000 UTC m=+0.039823788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:27:24 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:27:24 np0005540825 podman[280184]: 2025-12-01 10:27:24.999222108 +0000 UTC m=+0.159109895 container init 87bd397f3d82896b36c2abf9aa0f8172ccc524b885f69c4381df552d4e6cfc8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tharp, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:27:25 np0005540825 podman[280184]: 2025-12-01 10:27:25.010850767 +0000 UTC m=+0.170738584 container start 87bd397f3d82896b36c2abf9aa0f8172ccc524b885f69c4381df552d4e6cfc8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  1 05:27:25 np0005540825 podman[280184]: 2025-12-01 10:27:25.015540402 +0000 UTC m=+0.175428219 container attach 87bd397f3d82896b36c2abf9aa0f8172ccc524b885f69c4381df552d4e6cfc8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tharp, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 05:27:25 np0005540825 naughty_tharp[280200]: 167 167
Dec  1 05:27:25 np0005540825 systemd[1]: libpod-87bd397f3d82896b36c2abf9aa0f8172ccc524b885f69c4381df552d4e6cfc8f.scope: Deactivated successfully.
Dec  1 05:27:25 np0005540825 podman[280205]: 2025-12-01 10:27:25.083581598 +0000 UTC m=+0.041772350 container died 87bd397f3d82896b36c2abf9aa0f8172ccc524b885f69c4381df552d4e6cfc8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:27:25 np0005540825 systemd[1]: var-lib-containers-storage-overlay-22f5f7d49fed226f45c49b224850648eb40010e6920524f2451c8bc88d2865b2-merged.mount: Deactivated successfully.
Dec  1 05:27:25 np0005540825 podman[280205]: 2025-12-01 10:27:25.123828386 +0000 UTC m=+0.082019128 container remove 87bd397f3d82896b36c2abf9aa0f8172ccc524b885f69c4381df552d4e6cfc8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tharp, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:27:25 np0005540825 systemd[1]: libpod-conmon-87bd397f3d82896b36c2abf9aa0f8172ccc524b885f69c4381df552d4e6cfc8f.scope: Deactivated successfully.
Dec  1 05:27:25 np0005540825 podman[280226]: 2025-12-01 10:27:25.368858242 +0000 UTC m=+0.060413685 container create 65251b1d27328cb08877b365abcbee70fafa82fd7702723b90b9ff784fedd355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wu, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:27:25 np0005540825 systemd[1]: Started libpod-conmon-65251b1d27328cb08877b365abcbee70fafa82fd7702723b90b9ff784fedd355.scope.
Dec  1 05:27:25 np0005540825 podman[280226]: 2025-12-01 10:27:25.344929186 +0000 UTC m=+0.036484699 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:27:25 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:27:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd7146ab8dd8cf4db05a84d7700b80b730bf7d55f6bb03bea86dfbd31c8b7ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd7146ab8dd8cf4db05a84d7700b80b730bf7d55f6bb03bea86dfbd31c8b7ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd7146ab8dd8cf4db05a84d7700b80b730bf7d55f6bb03bea86dfbd31c8b7ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:25 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd7146ab8dd8cf4db05a84d7700b80b730bf7d55f6bb03bea86dfbd31c8b7ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:25 np0005540825 podman[280226]: 2025-12-01 10:27:25.468930929 +0000 UTC m=+0.160486452 container init 65251b1d27328cb08877b365abcbee70fafa82fd7702723b90b9ff784fedd355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:27:25 np0005540825 podman[280226]: 2025-12-01 10:27:25.477204068 +0000 UTC m=+0.168759511 container start 65251b1d27328cb08877b365abcbee70fafa82fd7702723b90b9ff784fedd355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  1 05:27:25 np0005540825 podman[280226]: 2025-12-01 10:27:25.480640899 +0000 UTC m=+0.172196372 container attach 65251b1d27328cb08877b365abcbee70fafa82fd7702723b90b9ff784fedd355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  1 05:27:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:27:25 np0005540825 epic_wu[280243]: {
Dec  1 05:27:25 np0005540825 epic_wu[280243]:    "1": [
Dec  1 05:27:25 np0005540825 epic_wu[280243]:        {
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            "devices": [
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "/dev/loop3"
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            ],
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            "lv_name": "ceph_lv0",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            "lv_size": "21470642176",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            "name": "ceph_lv0",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            "tags": {
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.cluster_name": "ceph",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.crush_device_class": "",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.encrypted": "0",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.osd_id": "1",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.type": "block",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.vdo": "0",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:                "ceph.with_tpm": "0"
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            },
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            "type": "block",
Dec  1 05:27:25 np0005540825 epic_wu[280243]:            "vg_name": "ceph_vg0"
Dec  1 05:27:25 np0005540825 epic_wu[280243]:        }
Dec  1 05:27:25 np0005540825 epic_wu[280243]:    ]
Dec  1 05:27:25 np0005540825 epic_wu[280243]: }
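The JSON block that epic_wu printed above has the shape of ceph-volume lvm list --format json output (an inference from the field names; the log does not show which command cephadm ran inside the container). Under that assumption, a minimal sketch for mapping OSD ids to their logical volumes and backing devices:

    import json

    # Trimmed sample of the document printed by epic_wu above.
    sample = '''
    {
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {"ceph.osd_id": "1", "ceph.type": "block"}
        }
      ]
    }
    '''

    def osd_devices(doc):
        """Map OSD id -> (lv_path, devices) from ceph-volume-style JSON."""
        return {
            osd_id: (lv["lv_path"], lv["devices"])
            for osd_id, lvs in json.loads(doc).items()
            for lv in lvs
        }

    print(osd_devices(sample))  # {'1': ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'])}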
Dec  1 05:27:25 np0005540825 systemd[1]: libpod-65251b1d27328cb08877b365abcbee70fafa82fd7702723b90b9ff784fedd355.scope: Deactivated successfully.
Dec  1 05:27:25 np0005540825 podman[280226]: 2025-12-01 10:27:25.813726471 +0000 UTC m=+0.505281904 container died 65251b1d27328cb08877b365abcbee70fafa82fd7702723b90b9ff784fedd355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wu, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  1 05:27:26 np0005540825 systemd[1]: var-lib-containers-storage-overlay-3fd7146ab8dd8cf4db05a84d7700b80b730bf7d55f6bb03bea86dfbd31c8b7ef-merged.mount: Deactivated successfully.
Dec  1 05:27:26 np0005540825 podman[280226]: 2025-12-01 10:27:26.095108631 +0000 UTC m=+0.786664104 container remove 65251b1d27328cb08877b365abcbee70fafa82fd7702723b90b9ff784fedd355 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_wu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:27:26 np0005540825 systemd[1]: libpod-conmon-65251b1d27328cb08877b365abcbee70fafa82fd7702723b90b9ff784fedd355.scope: Deactivated successfully.
Dec  1 05:27:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 576 B/s rd, 0 op/s
Dec  1 05:27:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:26.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:26.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
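The anonymous HEAD / HTTP/1.0 requests arriving from 192.168.122.100 and 192.168.122.102 roughly every two seconds look like load-balancer health probes against radosgw (an inference from the cadence and the empty user agent, not something the log states). A probe of this form can be reproduced with a raw socket, since HTTP/1.0 needs no Host header; host and port are assumptions here, as the log does not record the listening port:

    import socket

    def head_probe(host, port, timeout=2.0):
        """Send the same HEAD / HTTP/1.0 request the beast access log records
        and return the status line, e.g. 'HTTP/1.1 200 OK'."""
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
            return s.recv(1024).split(b"\r\n", 1)[0].decode()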
Dec  1 05:27:26 np0005540825 podman[280358]: 2025-12-01 10:27:26.753610804 +0000 UTC m=+0.034861227 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:27:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:27.343Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:27:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:27.345Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
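Both webhook receivers that Alertmanager is trying to notify (compute-1 and compute-2, port 8443, path /api/prometheus_receiver, plain http:// per the logged URLs) are unreachable: first a dial timeout, then context deadline exceeded on retry, so this is a connectivity problem rather than a rejected payload. As a purely hypothetical aid for isolating that, a stand-in receiver that accepts the POSTs could look like the following sketch:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Alertmanager POSTs a JSON batch of alerts; drain and ack it
            # regardless of path so the dispatcher stops retrying.
            length = int(self.headers.get("Content-Length", 0))
            self.rfile.read(length)
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()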
Dec  1 05:27:27 np0005540825 podman[280358]: 2025-12-01 10:27:27.350249674 +0000 UTC m=+0.631500047 container create 74f9825d2c97baa9ff79619b741aeeeb2bfbbf47b3818062b09a7183831cfbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:27:27 np0005540825 systemd[1]: Started libpod-conmon-74f9825d2c97baa9ff79619b741aeeeb2bfbbf47b3818062b09a7183831cfbd6.scope.
Dec  1 05:27:27 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:27:27 np0005540825 podman[280358]: 2025-12-01 10:27:27.456492875 +0000 UTC m=+0.737743308 container init 74f9825d2c97baa9ff79619b741aeeeb2bfbbf47b3818062b09a7183831cfbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 05:27:27 np0005540825 podman[280358]: 2025-12-01 10:27:27.468041751 +0000 UTC m=+0.749292134 container start 74f9825d2c97baa9ff79619b741aeeeb2bfbbf47b3818062b09a7183831cfbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  1 05:27:27 np0005540825 podman[280358]: 2025-12-01 10:27:27.472356746 +0000 UTC m=+0.753607199 container attach 74f9825d2c97baa9ff79619b741aeeeb2bfbbf47b3818062b09a7183831cfbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:27:27 np0005540825 pensive_banzai[280392]: 167 167
Dec  1 05:27:27 np0005540825 podman[280358]: 2025-12-01 10:27:27.474721008 +0000 UTC m=+0.755971351 container died 74f9825d2c97baa9ff79619b741aeeeb2bfbbf47b3818062b09a7183831cfbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:27:27 np0005540825 systemd[1]: libpod-74f9825d2c97baa9ff79619b741aeeeb2bfbbf47b3818062b09a7183831cfbd6.scope: Deactivated successfully.
Dec  1 05:27:27 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1df1a2f20b976dc1efee5bc4aab9d8bc769e133f148ba4ef9daefae3ba77daed-merged.mount: Deactivated successfully.
Dec  1 05:27:27 np0005540825 podman[280358]: 2025-12-01 10:27:27.520780651 +0000 UTC m=+0.802031024 container remove 74f9825d2c97baa9ff79619b741aeeeb2bfbbf47b3818062b09a7183831cfbd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_banzai, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 05:27:27 np0005540825 systemd[1]: libpod-conmon-74f9825d2c97baa9ff79619b741aeeeb2bfbbf47b3818062b09a7183831cfbd6.scope: Deactivated successfully.
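Each of these short-lived cephadm helper containers walks the same podman lifecycle: image pull, container create, conmon scope start, libcrun start, init/start/attach, the payload's output, then died, overlay unmount, remove, and conmon scope teardown. The m=+N offsets podman logs are seconds since that podman process started, so per-container runtimes can be read straight out of the events; a small parser sketch (offsets are only comparable within one podman PID):

    import re

    EVENT = re.compile(
        r"m=\+(?P<off>[0-9.]+) container (?P<ev>\w+) (?P<cid>[0-9a-f]{64})")

    def runtimes(lines):
        """Seconds from 'container start' to 'container died' per container."""
        started, spans = {}, {}
        for line in lines:
            m = EVENT.search(line)
            if not m:
                continue
            off, cid = float(m["off"]), m["cid"]
            if m["ev"] == "start":
                started[cid] = off
            elif m["ev"] == "died" and cid in started:
                spans[cid[:12]] = off - started[cid]
        return spans

For pensive_banzai above this gives 0.7560 - 0.7493, about 0.007 s: the container ran just long enough to print "167 167".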
Dec  1 05:27:27 np0005540825 nova_compute[256151]: 2025-12-01 10:27:27.620 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:27 np0005540825 podman[280428]: 2025-12-01 10:27:27.754494676 +0000 UTC m=+0.044229695 container create 414e4ba1180c8018cd2027b8d2c50a09e2207665adde83f85f47f29e1f11a0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mcnulty, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec  1 05:27:27 np0005540825 systemd[1]: Started libpod-conmon-414e4ba1180c8018cd2027b8d2c50a09e2207665adde83f85f47f29e1f11a0c1.scope.
Dec  1 05:27:27 np0005540825 podman[280428]: 2025-12-01 10:27:27.735899942 +0000 UTC m=+0.025635011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:27:27 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:27:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de0692ed38e2f34cb0fb6c194c739617f5d1f6990a702a27c1e324211e3a18ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de0692ed38e2f34cb0fb6c194c739617f5d1f6990a702a27c1e324211e3a18ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de0692ed38e2f34cb0fb6c194c739617f5d1f6990a702a27c1e324211e3a18ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:27 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de0692ed38e2f34cb0fb6c194c739617f5d1f6990a702a27c1e324211e3a18ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:27:27 np0005540825 podman[280428]: 2025-12-01 10:27:27.912790409 +0000 UTC m=+0.202525468 container init 414e4ba1180c8018cd2027b8d2c50a09e2207665adde83f85f47f29e1f11a0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mcnulty, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 05:27:27 np0005540825 podman[280428]: 2025-12-01 10:27:27.926621846 +0000 UTC m=+0.216356825 container start 414e4ba1180c8018cd2027b8d2c50a09e2207665adde83f85f47f29e1f11a0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:27:27 np0005540825 podman[280428]: 2025-12-01 10:27:27.930387166 +0000 UTC m=+0.220122155 container attach 414e4ba1180c8018cd2027b8d2c50a09e2207665adde83f85f47f29e1f11a0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mcnulty, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 05:27:28 np0005540825 nova_compute[256151]: 2025-12-01 10:27:28.233 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 576 B/s rd, 0 op/s
Dec  1 05:27:28 np0005540825 lvm[280519]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:27:28 np0005540825 lvm[280519]: VG ceph_vg0 finished
Dec  1 05:27:28 np0005540825 sad_mcnulty[280444]: {}
Dec  1 05:27:28 np0005540825 systemd[1]: libpod-414e4ba1180c8018cd2027b8d2c50a09e2207665adde83f85f47f29e1f11a0c1.scope: Deactivated successfully.
Dec  1 05:27:28 np0005540825 systemd[1]: libpod-414e4ba1180c8018cd2027b8d2c50a09e2207665adde83f85f47f29e1f11a0c1.scope: Consumed 1.264s CPU time.
Dec  1 05:27:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:28.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:28 np0005540825 podman[280522]: 2025-12-01 10:27:28.703364908 +0000 UTC m=+0.031316203 container died 414e4ba1180c8018cd2027b8d2c50a09e2207665adde83f85f47f29e1f11a0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mcnulty, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 05:27:28 np0005540825 systemd[1]: var-lib-containers-storage-overlay-de0692ed38e2f34cb0fb6c194c739617f5d1f6990a702a27c1e324211e3a18ee-merged.mount: Deactivated successfully.
Dec  1 05:27:28 np0005540825 podman[280522]: 2025-12-01 10:27:28.752243275 +0000 UTC m=+0.080194540 container remove 414e4ba1180c8018cd2027b8d2c50a09e2207665adde83f85f47f29e1f11a0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  1 05:27:28 np0005540825 systemd[1]: libpod-conmon-414e4ba1180c8018cd2027b8d2c50a09e2207665adde83f85f47f29e1f11a0c1.scope: Deactivated successfully.
Dec  1 05:27:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:27:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:27:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:27:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:27:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:27:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:27:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:28.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:27:29 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:27:29 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:27:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 576 B/s rd, 0 op/s
Dec  1 05:27:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:27:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:30.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:30.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:31] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:27:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:31] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:27:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 576 B/s rd, 0 op/s
Dec  1 05:27:32 np0005540825 nova_compute[256151]: 2025-12-01 10:27:32.668 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:32.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:33 np0005540825 nova_compute[256151]: 2025-12-01 10:27:33.236 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:33.724Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:27:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
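ceph-crash scrapes /var/lib/ceph/crash for new crash dumps and is getting EACCES here, so the directory exists but the agent's uid cannot traverse it (167 is the fixed ceph uid/gid in Ceph packaging; whether the "167 167" printed by pensive_banzai earlier was a probe of this path is not shown in the log). A quick way to see what the scraper sees, as a sketch:

    import os, stat

    def explain_access(path="/var/lib/ceph/crash"):
        """Report owner, mode, and whether the current uid can scan `path`,
        mirroring the check that fails with EACCES in ceph-crash."""
        st = os.stat(path)
        return {
            "owner": (st.st_uid, st.st_gid),
            "mode": stat.filemode(st.st_mode),
            "readable+searchable": os.access(path, os.R_OK | os.X_OK),
        }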
Dec  1 05:27:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:34.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:27:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:34.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:27:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:27:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:27:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:36.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:36.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:37 np0005540825 podman[280570]: 2025-12-01 10:27:37.204443986 +0000 UTC m=+0.062313526 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
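The health_status events embed the full container definition as config_data={...}. The blob is a Python literal (single-quoted strings, True/False), not JSON, so ast.literal_eval is the right parser; a sketch that recovers the healthcheck and volume list from such a line:

    import ast, re

    def config_data(line):
        """Extract the config_data={...} dict from a podman health_status
        line. A greedy match to the last '}' works because nothing after
        the blob in these lines contains braces."""
        m = re.search(r"config_data=(\{.*\})", line)
        return ast.literal_eval(m.group(1)) if m else None

    # cfg = config_data(log_line)
    # cfg["healthcheck"]["test"]  -> '/openstack/healthcheck'
    # cfg["net"]                  -> 'host'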
Dec  1 05:27:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:37.346Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:27:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:37.347Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:37 np0005540825 nova_compute[256151]: 2025-12-01 10:27:37.670 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:38 np0005540825 nova_compute[256151]: 2025-12-01 10:27:38.239 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:38.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:38.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:38.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:27:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:27:39
Dec  1 05:27:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:27:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:27:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.nfs', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.meta', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control']
Dec  1 05:27:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:27:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:27:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:27:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:27:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:27:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:27:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:27:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:27:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
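Every pg_autoscaler line above applies the same arithmetic: raw pg target = capacity_ratio x bias x (target PGs per OSD x OSD count), then quantization to a power of two with a floor, and a change is only proposed when the result is far enough from the current pg_num (which is why 'cephfs.cephfs.meta' shows 16 against a current 32 while the tiny pools stay at 32). The multiplier implied by these numbers is 300, consistent with the default mon_target_pg_per_osd=100 on a 3-OSD cluster; that factoring is an inference from the logged values, not something the log states. A check against two of the lines:

    def pg_target(capacity_ratio, bias, osds=3, pg_per_osd=100):
        """Raw (pre-quantization) PG target; osds and pg_per_osd are
        assumptions inferred from the logged values (ratio * bias * 300
        reproduces every 'pg target' above)."""
        return capacity_ratio * bias * osds * pg_per_osd

    # Pool '.mgr': bias 1.0
    assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12
    # Pool 'cephfs.cephfs.meta': bias 4.0
    assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12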
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:27:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:27:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:27:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:40.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:27:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:27:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:40.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:27:41 np0005540825 podman[280593]: 2025-12-01 10:27:41.230170411 +0000 UTC m=+0.089362333 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 05:27:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:41] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:27:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:41] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:27:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:27:42 np0005540825 nova_compute[256151]: 2025-12-01 10:27:42.675 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:42.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:27:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:42.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:27:43 np0005540825 nova_compute[256151]: 2025-12-01 10:27:43.241 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:43.725Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:27:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:27:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:44.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:27:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:44.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:27:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:27:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:27:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:27:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:27:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:46.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:27:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:47.348Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:47 np0005540825 systemd-logind[789]: New session 56 of user zuul.
Dec  1 05:27:47 np0005540825 systemd[1]: Started Session 56 of User zuul.
Dec  1 05:27:47 np0005540825 nova_compute[256151]: 2025-12-01 10:27:47.676 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:48 np0005540825 nova_compute[256151]: 2025-12-01 10:27:48.243 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:48.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:48.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:48.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:27:50 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25555 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:50 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26048 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:50 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16311 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:27:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:27:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:50.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:27:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:50.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:50 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25564 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:50 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26057 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:50 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16317 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:51] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:27:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:27:51] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:27:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec  1 05:27:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2039892442' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  1 05:27:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:27:52 np0005540825 nova_compute[256151]: 2025-12-01 10:27:52.706 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:27:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:52.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:27:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:27:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:52.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:27:53 np0005540825 nova_compute[256151]: 2025-12-01 10:27:53.245 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:53 np0005540825 podman[280961]: 2025-12-01 10:27:53.268389069 +0000 UTC m=+0.124189278 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:27:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:53.726Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:27:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:53.726Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:27:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:53.726Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:27:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:27:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:27:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:27:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:54.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:27:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:54.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:27:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:27:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:27:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:56.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:27:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:27:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:56.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:27:57 np0005540825 nova_compute[256151]: 2025-12-01 10:27:57.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:27:57 np0005540825 nova_compute[256151]: 2025-12-01 10:27:57.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:27:57 np0005540825 nova_compute[256151]: 2025-12-01 10:27:57.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:27:57 np0005540825 nova_compute[256151]: 2025-12-01 10:27:57.053 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:27:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:57.349Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:57 np0005540825 ovs-vsctl[281047]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  1 05:27:57 np0005540825 nova_compute[256151]: 2025-12-01 10:27:57.750 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:58 np0005540825 nova_compute[256151]: 2025-12-01 10:27:58.247 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:27:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:27:58 np0005540825 virtqemud[255660]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  1 05:27:58 np0005540825 virtqemud[255660]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  1 05:27:58 np0005540825 virtqemud[255660]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  1 05:27:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:27:58.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:58 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25576 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:27:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:27:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:27:58.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:27:58 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26069 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:27:58.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:27:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:27:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:27:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:27:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:27:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:27:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  1 05:27:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  1 05:27:59 np0005540825 nova_compute[256151]: 2025-12-01 10:27:59.049 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:27:59 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: cache status {prefix=cache status} (starting...)
Dec  1 05:27:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  1 05:27:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  1 05:27:59 np0005540825 lvm[281357]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:27:59 np0005540825 lvm[281357]: VG ceph_vg0 finished
Dec  1 05:27:59 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26084 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:59 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25588 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:59 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: client ls {prefix=client ls} (starting...)
Dec  1 05:27:59 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25600 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:59 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26096 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:59 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16338 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:27:59 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: damage ls {prefix=damage ls} (starting...)
Dec  1 05:28:00 np0005540825 nova_compute[256151]: 2025-12-01 10:28:00.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:28:00 np0005540825 nova_compute[256151]: 2025-12-01 10:28:00.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:28:00 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26108 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:00 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25612 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:00 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump loads {prefix=dump loads} (starting...)
Dec  1 05:28:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  1 05:28:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517039617' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  1 05:28:00 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  1 05:28:00 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  1 05:28:00 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16356 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:00 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  1 05:28:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:28:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3941467253' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:28:00 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  1 05:28:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:00 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25630 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:00.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:00 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26138 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:00 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16368 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:00.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:00 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  1 05:28:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec  1 05:28:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1656107208' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  1 05:28:01 np0005540825 nova_compute[256151]: 2025-12-01 10:28:01.022 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:28:01 np0005540825 nova_compute[256151]: 2025-12-01 10:28:01.041 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:28:01 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25645 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:01 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  1 05:28:01 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26150 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:01 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16383 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:01 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: ops {prefix=ops} (starting...)
Dec  1 05:28:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:01] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:28:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:01] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:28:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec  1 05:28:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3403051108' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  1 05:28:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  1 05:28:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  1 05:28:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  1 05:28:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  1 05:28:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec  1 05:28:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2495423836' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  1 05:28:01 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16401 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:01 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: session ls {prefix=session ls} (starting...)
Dec  1 05:28:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  1 05:28:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749002467' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  1 05:28:02 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: status {prefix=status} (starting...)
Dec  1 05:28:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16419 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:28:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25687 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T10:28:02.428+0000 7f5445f76640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  1 05:28:02 np0005540825 ceph-mgr[74709]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  1 05:28:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  1 05:28:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/767115279' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  1 05:28:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26198 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:02 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T10:28:02.600+0000 7f5445f76640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  1 05:28:02 np0005540825 ceph-mgr[74709]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  1 05:28:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  1 05:28:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1199932636' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  1 05:28:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:02.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:02 np0005540825 nova_compute[256151]: 2025-12-01 10:28:02.752 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  1 05:28:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1545124193' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  1 05:28:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:02.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:02 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  1 05:28:02 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2634542928' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  1 05:28:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec  1 05:28:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1457628468' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  1 05:28:03 np0005540825 nova_compute[256151]: 2025-12-01 10:28:03.249 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  1 05:28:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433939349' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  1 05:28:03 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16461 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T10:28:03.477+0000 7f5445f76640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  1 05:28:03 np0005540825 ceph-mgr[74709]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  1 05:28:03 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26240 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:03 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25723 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:03.727Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:28:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:03.728Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  1 05:28:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3691430398' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  1 05:28:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec  1 05:28:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3518594572' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  1 05:28:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:28:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26261 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25738 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  1 05:28:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1731495337' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  1 05:28:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec  1 05:28:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/216293876' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  1 05:28:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26279 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25759 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:28:04.585 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:28:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:28:04.586 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:28:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:28:04.586 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:28:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16503 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:04.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26294 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:28:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:04.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:28:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25774 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  1 05:28:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946393104' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  1 05:28:05 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16518 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:05 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26306 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  1 05:28:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282264157' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  1 05:28:05 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25795 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:05 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16536 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957643 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 4775936 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 4775936 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 4775936 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 4767744 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 4767744 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957643 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 4759552 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 4759552 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 4759552 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 4751360 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 4751360 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957643 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04656000 session 0x55ea069e2780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea044ccc00 session 0x55ea05bf2d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 4751360 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 4743168 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 4743168 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 4734976 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 4734976 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957643 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 4726784 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 4718592 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 4718592 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 4710400 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 4710400 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957643 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 4710400 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.221664429s of 30.241024017s, submitted: 5
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 4702208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 4702208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 4694016 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 4694016 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957775 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 4685824 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 4685824 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 4677632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 4677632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 4677632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956593 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 4669440 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 4669440 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 4669440 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 4661248 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.427317619s of 13.440299034s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 4661248 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956461 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 4653056 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 4644864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 4644864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 4636672 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 4636672 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956461 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 4628480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 4628480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 4620288 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 4620288 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 4620288 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956461 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 4612096 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 4612096 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 4603904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 4603904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 4603904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956461 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 4595712 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 4595712 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 4587520 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 4587520 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 4579328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956461 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 4579328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 4579328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 4571136 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 4571136 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 4562944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956461 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 4562944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 4546560 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 4546560 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 4538368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 4538368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956461 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 4538368 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 4530176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 4530176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 4521984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 4521984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956461 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 4521984 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 4513792 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 4513792 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 4505600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 4505600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956461 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 4497408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea044cc800 session 0x55ea04d0c1e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea05b25800 session 0x55ea06d60780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 4497408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 4497408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 4489216 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 4489216 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956461 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 4481024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 4472832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 4472832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 4464640 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 4464640 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956461 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 4464640 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 52.091716766s of 52.094764709s, submitted: 1
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 4456448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 4456448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 4448256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 4448256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958105 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 4440064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 4440064 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 4431872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 4431872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 4415488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958105 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 4415488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 4415488 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 4407296 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 4407296 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 4399104 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.150740623s of 13.165208817s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957973 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 4399104 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 4399104 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 4390912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 4390912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea045e4b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 4382720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957973 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 4382720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 4366336 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 4366336 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 4366336 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 4358144 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957973 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 4358144 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 4349952 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 4349952 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 4349952 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 4341760 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.310926437s of 15.317010880s, submitted: 1
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958105 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 4341760 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 4333568 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 4333568 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 4325376 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 4325376 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959617 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 4317184 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 4317184 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 4317184 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 4308992 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 4308992 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959026 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 4300800 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 4292608 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 4292608 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 4284416 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 4284416 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959026 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 4276224 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.970130920s of 15.980693817s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 4276224 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 4268032 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 4268032 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 4259840 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 4259840 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 4259840 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 4251648 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 4251648 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 4251648 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 4243456 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 4243456 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 4235264 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 8395 writes, 34K keys, 8395 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
Cumulative WAL: 8395 writes, 1674 syncs, 5.01 writes per sync, written: 0.02 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 8395 writes, 34K keys, 8395 commit groups, 1.0 writes per commit group, ingest: 21.35 MB, 0.04 MB/s
Interval WAL: 8395 writes, 1674 syncs, 5.01 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 4177920 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 4169728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 4169728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 4169728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 4161536 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 4161536 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 4153344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 4145152 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 4128768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 4128768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 4128768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 4120576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 4120576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 4120576 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 4112384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 4112384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 4104192 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 4104192 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 4104192 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 4087808 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 4087808 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 4079616 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 4079616 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 4071424 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 4071424 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 4071424 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 4063232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 4063232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 4046848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 4046848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 4046848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 4038656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 4030464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 4030464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 4030464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 4022272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 4022272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 4014080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 4014080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 4014080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 4005888 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 4005888 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 3997696 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04ac8800 session 0x55ea06d61860
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea044ccc00 session 0x55ea04cec3c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 3997696 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 3997696 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 3989504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 3989504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 3989504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 3981312 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 3981312 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 3973120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 3973120 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958894 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 3964928 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 65.908096313s of 65.911453247s, submitted: 1
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 3956736 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 3956736 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 3948544 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 3932160 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959026 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 3932160 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 3932160 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 3923968 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 3923968 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 3915776 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960538 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 3915776 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 3915776 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 3907584 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.158259392s of 12.165806770s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 3907584 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 3899392 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959947 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 3899392 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 3899392 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 3891200 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 3891200 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 3883008 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959815 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 3883008 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 3874816 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 3874816 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 3874816 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0467c800 session 0x55ea04d0c780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 3866624 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea044cc800 session 0x55ea0679c960
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea069465a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959815 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 3866624 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 3866624 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 3858432 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 3858432 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 3850240 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959815 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 3850240 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 3850240 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 3842048 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 3842048 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 3833856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959815 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 3833856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.929027557s of 22.110603333s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 3825664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 3825664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 3825664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 3817472 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960079 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 3817472 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 3809280 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 3809280 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 3801088 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 3801088 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961591 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 3792896 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 3792896 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 3792896 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.994256973s of 13.005094528s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 3784704 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 3784704 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960868 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 3776512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26312 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 3776512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 3776512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 3768320 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 3768320 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960736 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 3760128 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 3727360 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 3727360 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 3719168 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 3719168 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960736 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea05b25800 session 0x55ea069470e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03f3e000 session 0x55ea045bf2c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 3710976 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 3710976 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 3702784 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 3702784 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 3702784 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960736 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 3694592 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 3694592 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 3694592 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 3686400 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 3686400 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960736 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 3686400 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.318239212s of 22.709581375s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 2637824 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 2637824 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 2637824 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 2637824 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960868 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 2629632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 2629632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 2629632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 2629632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 2629632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960868 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960277 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.990248680s of 14.997964859s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960145 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960145 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 3678208 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.054379463s of 11.058124542s, submitted: 1
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 3645440 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 3547136 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960145 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960145 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea03950000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960145 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03158400 session 0x55ea06667c20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06506400 session 0x55ea06d60000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960145 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.205194473s of 23.217275620s, submitted: 361
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960277 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961921 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 3358720 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963433 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.560570717s of 11.582404137s, submitted: 4
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964945 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963499 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963499 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea06848d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea044cc800 session 0x55ea06c174a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963499 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963499 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.158605576s of 25.176582336s, submitted: 5
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963631 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963631 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.222551346s of 13.232032776s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962908 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962908 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962908 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962908 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962908 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962908 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03158400 session 0x55ea069e2b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962908 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962908 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.487388611s of 38.491107941s, submitted: 1
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963040 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea05c62c00 session 0x55ea0679d0e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04ac8800 session 0x55ea06946f00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963040 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963040 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.240982056s of 13.267016411s, submitted: 1
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962908 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963040 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964552 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 3653632 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.218000412s of 18.261745453s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964420 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964420 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964420 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964420 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964420 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964420 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964420 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964420 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964420 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea069465a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964420 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964420 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.477725983s of 54.481185913s, submitted: 1
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964552 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966064 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965341 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965341 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965341 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965341 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965341 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965341 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 3620864 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965341 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965341 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03158c00 session 0x55ea06acd860
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0467c800 session 0x55ea04d4f0e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965341 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06506400 session 0x55ea06848d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965341 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 3604480 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 3588096 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965341 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 61.464328766s of 61.515007019s, submitted: 4
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 3588096 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 3588096 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 3588096 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 3588096 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 3588096 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965605 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 3588096 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967117 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.879024506s of 12.032022476s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966394 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 3579904 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966262 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea05bdbe00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03158400 session 0x55ea03951860
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03158c00 session 0x55ea06665a40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966262 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966262 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.752355576s of 21.768016815s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966526 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968038 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968038 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.076677322s of 12.088689804s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 3530752 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea044cc800 session 0x55ea0679c960
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea03efb860
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 3530752 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967183 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967183 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 3489792 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.910959244s of 13.925210953s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967315 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968827 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969748 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 3473408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.706723213s of 14.736872673s, submitted: 4
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 3473408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 3473408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 3473408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 3473408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04a10800 session 0x55ea05bf2d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04ac8800 session 0x55ea03efad20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 59.579448700s of 59.583278656s, submitted: 1
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03158c00 session 0x55ea05bf6b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea044cc800 session 0x55ea04d56b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969748 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 3383296 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971260 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 3366912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 3366912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 3366912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 3366912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: mgrc ms_handle_reset ms_handle_reset con 0x55ea04ca1000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1444264366
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1444264366,v1:192.168.122.100:6801/1444264366]
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: mgrc handle_mgr_configure stats_period=5
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 3186688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04a11400 session 0x55ea0684bc20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.972988129s of 10.982455254s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971392 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 3170304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 3170304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 3170304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 3170304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972181 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975205 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.104986191s of 12.128035545s, submitted: 6
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974482 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974482 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea038c6d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06c174a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974482 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06506400 session 0x55ea06664780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04ac8800 session 0x55ea06acd0e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974482 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.088794708s of 21.097955704s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 3112960 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 3112960 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974614 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 3112960 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 3104768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 3104768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 3088384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 3088384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974746 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974155 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 9221 writes, 35K keys, 9221 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 9221 writes, 2074 syncs, 4.45 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 826 writes, 1269 keys, 826 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s#012Interval WAL: 826 writes, 400 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.976496696s of 12.989195824s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973432 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25813 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973300 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973300 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03158400 session 0x55ea05bf63c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea05c62c00 session 0x55ea03efa960
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973300 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread fragmentation_score=0.000030 took=0.000075s
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973300 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973300 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.629129410s of 29.642702103s, submitted: 3
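The _kv_sync_thread utilization line reports idle time over the sampling window, so the busy fraction falls out directly; here the kv sync thread was busy for well under 0.1% of the last ~30 s, i.e. this OSD is essentially idle:

```python
# Figures from the _kv_sync_thread line above.
idle, window = 29.629129410, 29.642702103
print(f"kv_sync busy fraction: {1 - idle / window:.4%}")  # ~0.0458%
```

The same pattern holds for the later utilization samples in this window (idle within a few hundredths of a second of the full interval, single-digit submit counts), apart from one burst of 378 submissions near the end of the capture.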
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973432 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974944 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.602149010s of 13.611645699s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea05b23000 session 0x55ea05c59e00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06639800 session 0x55ea05c592c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.389781952s of 40.393127441s, submitted: 1
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974944 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976456 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977377 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.319105148s of 13.360455513s, submitted: 4
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977245 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea069e3e00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea05c58960
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06506400 session 0x55ea066670e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb0000 session 0x55ea06946f00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977245 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977245 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.696228981s of 14.700200081s, submitted: 1
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977509 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977509 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.131916046s of 12.145732880s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976195 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.728481293s of 25.739419937s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 2924544 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 2768896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 2686976 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 2686976 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3177369341' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
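These two ceph-mon lines are the monitor side of an admin running ceph mgr module ls: handle_command receives the mon_command, and the audit channel records the dispatch. A small filter for extracting such audit entries from a capture like this one (the input path is a placeholder):

```python
import re

# Matches audit-channel entries of the form seen above:
#   log_channel(audit) log [DBG] : from=... cmd=[{...}]: dispatch
AUDIT = re.compile(r"log_channel\(audit\).*?cmd=(\[.*?\]): (\w+)")

with open("messages.log") as fh:  # placeholder capture file
    for line in fh:
        m = AUDIT.search(line)
        if m:
            print(m.group(2), m.group(1))
```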
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 2686976 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 2686976 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea03ed21e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea05b23000 session 0x55ea03d9f860
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06849e00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.944995880s of 32.024192810s, submitted: 378
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976195 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976327 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.414603233s of 12.424942970s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977839 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977707 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 2662400 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06acc5a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb0000 session 0x55ea06d60d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea04d56b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 55.334781647s of 55.353782654s, submitted: 4
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977707 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977839 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.326207161s of 10.340394974s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978760 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980272 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980008 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06639000 session 0x55ea05bf6780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06506400 session 0x55ea06ab25a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980008 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980008 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.171087265s of 24.185573578s, submitted: 4
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980140 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984676 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984085 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.075248718s of 12.094985962s, submitted: 5
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983362 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983362 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06505800 session 0x55ea06667e00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04ac8800 session 0x55ea05c59680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983362 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983362 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.952816010s of 22.962663651s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983494 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983494 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982312 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.068272591s of 12.090098381s, submitted: 3
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981589 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea04ce2f00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea05b23000 session 0x55ea05bf30e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981589 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981589 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.562101364s of 18.569944382s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2596864 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981721 data_alloc: 218103808 data_used: 270336
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2596864 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2596864 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2596864 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 138 ms_handle_reset con 0x55ea03eb1400 session 0x55ea03d9f680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 138 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea04cedc20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb987000/0x0/0x4ffc00000, data 0xdcb720/0xe84000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 19218432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 139 ms_handle_reset con 0x55ea0315bc00 session 0x55ea0684bc20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 19202048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080011 data_alloc: 218103808 data_used: 274432
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 19202048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fb982000/0x0/0x4ffc00000, data 0xdcd8a1/0xe88000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 140 ms_handle_reset con 0x55ea0315dc00 session 0x55ea05bdab40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 18137088 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 18120704 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 18120704 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 18120704 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081781 data_alloc: 218103808 data_used: 274432
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 18120704 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fb980000/0x0/0x4ffc00000, data 0xdcf9ff/0xe8b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.686079979s of 12.876276970s, submitted: 45
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb980000/0x0/0x4ffc00000, data 0xdcf9ff/0xe8b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97d000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083699 data_alloc: 218103808 data_used: 274432
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083699 data_alloc: 218103808 data_used: 274432
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086723 data_alloc: 218103808 data_used: 274432
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.712406158s of 14.738780975s, submitted: 18
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086591 data_alloc: 218103808 data_used: 274432
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 ms_handle_reset con 0x55ea05b23000 session 0x55ea038c6d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 ms_handle_reset con 0x55ea04ac8800 session 0x55ea03d9e960
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086591 data_alloc: 218103808 data_used: 274432
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086591 data_alloc: 218103808 data_used: 274432
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.687610626s of 15.697146416s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086723 data_alloc: 218103808 data_used: 274432
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 5545984 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06c8ad20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 5545984 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 5545984 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 5996544 heap: 104898560 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea066674a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea05b23000 session 0x55ea05bf3a40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea06667a40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb976000/0x0/0x4ffc00000, data 0xdd5d12/0xe95000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea06507c00 session 0x55ea066654a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea0315dc00 session 0x55ea04d0dc20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178396 data_alloc: 234881024 data_used: 11739136
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb2c7000/0x0/0x4ffc00000, data 0x1484d12/0x1544000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb2c7000/0x0/0x4ffc00000, data 0x1484d12/0x1544000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.715719223s of 12.101661682s, submitted: 36
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176038 data_alloc: 234881024 data_used: 11739136
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb2c8000/0x0/0x4ffc00000, data 0x1484d12/0x1544000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea04cecf00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb2c8000/0x0/0x4ffc00000, data 0x1484d12/0x1544000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb2c4000/0x0/0x4ffc00000, data 0x1486d3a/0x1547000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 9461760 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 6021120 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb2c4000/0x0/0x4ffc00000, data 0x1486d3a/0x1547000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105463808 unmapped: 3112960 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226388 data_alloc: 234881024 data_used: 18665472
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105463808 unmapped: 3112960 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105463808 unmapped: 3112960 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105463808 unmapped: 3112960 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb2c5000/0x0/0x4ffc00000, data 0x1486d3a/0x1547000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 3080192 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 3080192 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226388 data_alloc: 234881024 data_used: 18665472
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 3080192 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb2c5000/0x0/0x4ffc00000, data 0x1486d3a/0x1547000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 3047424 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 3047424 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 3047424 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb2c5000/0x0/0x4ffc00000, data 0x1486d3a/0x1547000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.687959671s of 14.876306534s, submitted: 14
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 5038080 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303308 data_alloc: 234881024 data_used: 19673088
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0000 session 0x55ea06ab2f00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06506400 session 0x55ea06ab2d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa94c000/0x0/0x4ffc00000, data 0x1dffd3a/0x1ec0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 4972544 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 4759552 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 4759552 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 4759552 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 4759552 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308934 data_alloc: 234881024 data_used: 19836928
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 4759552 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 4726784 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 4726784 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 4726784 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 4726784 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308934 data_alloc: 234881024 data_used: 19836928
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 4726784 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.651059151s of 12.785633087s, submitted: 60
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309218 data_alloc: 234881024 data_used: 19841024
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309218 data_alloc: 234881024 data_used: 19841024
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06505800 session 0x55ea03e4fa40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea06c8a000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.128098488s of 12.134003639s, submitted: 1
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 5537792 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 5537792 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302979 data_alloc: 234881024 data_used: 19841024
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea045ffc20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0000 session 0x55ea0679d4a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea06ab3680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 5537792 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea069e23c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 3964928 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06505800 session 0x55ea04cec000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06c8a1e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0000 session 0x55ea045e5e00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea045fef00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea06acc780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06506400 session 0x55ea06846b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 11460608 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 11460608 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 11460608 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352704 data_alloc: 234881024 data_used: 20365312
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9180000/0x0/0x4ffc00000, data 0x242ad9c/0x24ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea038c6780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9180000/0x0/0x4ffc00000, data 0x242ad9c/0x24ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 11444224 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 11444224 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9180000/0x0/0x4ffc00000, data 0x242ad9c/0x24ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9180000/0x0/0x4ffc00000, data 0x242ad9c/0x24ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea06946960
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 11804672 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 11804672 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea03ed32c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.177427292s of 10.367736816s, submitted: 40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04657400 session 0x55ea03ea3680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 11788288 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354354 data_alloc: 234881024 data_used: 20365312
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 11788288 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 11788288 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f917f000/0x0/0x4ffc00000, data 0x242adac/0x24ed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 11788288 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 11788288 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10551296 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385514 data_alloc: 234881024 data_used: 24961024
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 7503872 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f917f000/0x0/0x4ffc00000, data 0x242adac/0x24ed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 7503872 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 7503872 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 7503872 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 7503872 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385514 data_alloc: 234881024 data_used: 24961024
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 7503872 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.596323013s of 12.603732109s, submitted: 2
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 7503872 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f917f000/0x0/0x4ffc00000, data 0x242adac/0x24ed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 7471104 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 7471104 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 7471104 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388042 data_alloc: 234881024 data_used: 25010176
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 7544832 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9023000/0x0/0x4ffc00000, data 0x2585dac/0x2648000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 8036352 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 8036352 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 8036352 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe6000/0x0/0x4ffc00000, data 0x25c2dac/0x2685000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 8003584 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405528 data_alloc: 234881024 data_used: 25051136
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 8003584 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 8019968 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.311245918s of 10.453396797s, submitted: 46
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 7766016 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fc8000/0x0/0x4ffc00000, data 0x25e1dac/0x26a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 7733248 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 7733248 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22800 session 0x55ea06accd20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06638800 session 0x55ea06acc3c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403360 data_alloc: 234881024 data_used: 25055232
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 10452992 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea04d0a1e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935f000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313783 data_alloc: 234881024 data_used: 20365312
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935f000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935f000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.488372803s of 12.584567070s, submitted: 27
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea03d9fc20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea0679c000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 10428416 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146657 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea06acdc20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145841 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145841 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145841 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145841 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea066670e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea060be960
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea05c58000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06638800 session 0x55ea05bf70e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.042076111s of 23.134502411s, submitted: 31
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04657400 session 0x55ea045e4780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea04d0c1e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea04d574a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea03ea5e00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06638800 session 0x55ea06ab3680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0000 session 0x55ea069e34a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6ce000/0x0/0x4ffc00000, data 0xedcd4a/0xf9e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161065 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6ce000/0x0/0x4ffc00000, data 0xedcd4a/0xf9e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06946000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 16637952 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161065 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea06c8b680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea06acd860
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 16637952 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06638800 session 0x55ea06c163c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 16326656 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.804592133s of 10.089959145s, submitted: 14
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174152 data_alloc: 234881024 data_used: 13336576
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174152 data_alloc: 234881024 data_used: 13336576
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06503c00 session 0x55ea06d60d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea04d56b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ac8800 session 0x55ea038c63c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea045be1e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.164918900s of 12.164921761s, submitted: 0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 15761408 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225180 data_alloc: 234881024 data_used: 13484032
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 14311424 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233384 data_alloc: 234881024 data_used: 13656064
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0x15c0d5a/0x1683000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233096 data_alloc: 234881024 data_used: 13656064
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fe6000/0x0/0x4ffc00000, data 0x15c3d5a/0x1686000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233096 data_alloc: 234881024 data_used: 13656064
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fe6000/0x0/0x4ffc00000, data 0x15c3d5a/0x1686000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.687915802s of 19.852085114s, submitted: 57
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232984 data_alloc: 234881024 data_used: 13656064
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fe5000/0x0/0x4ffc00000, data 0x15c4d5a/0x1687000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 13508608 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 13508608 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea066641e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22800 session 0x55ea05c583c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 13524992 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea06664000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155061 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155061 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155061 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155061 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155061 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea06acde00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ac8800 session 0x55ea06acc5a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea069e30e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea069e2780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.104520798s of 30.352832794s, submitted: 35
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110657536 unmapped: 18382848 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea04d57c20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea04d0cf00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ac8800 session 0x55ea04d0c3c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185720 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea039512c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea03efa3c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa098000/0x0/0x4ffc00000, data 0x1101d73/0x11c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa098000/0x0/0x4ffc00000, data 0x1101dac/0x11c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185736 data_alloc: 234881024 data_used: 12271616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea03efad20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa098000/0x0/0x4ffc00000, data 0x1101dac/0x11c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 18235392 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200328 data_alloc: 234881024 data_used: 14434304
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa098000/0x0/0x4ffc00000, data 0x1101dac/0x11c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2803 syncs, 3.86 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1606 writes, 4059 keys, 1606 commit groups, 1.0 writes per commit group, ingest: 3.17 MB, 0.01 MB/s
Interval WAL: 1606 writes, 729 syncs, 2.20 writes per sync, written: 0.00 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
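The interval counters in the dump are internally consistent, e.g. 1606 WAL writes over 729 syncs reproduces the printed 2.20 writes per sync (the cumulative 3.86 cannot be re-derived exactly because "10K writes" is rounded):

    interval_writes, interval_syncs = 1606, 729
    print(f"{interval_writes / interval_syncs:.2f} writes per sync")   # 2.20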
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200328 data_alloc: 234881024 data_used: 14434304
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa098000/0x0/0x4ffc00000, data 0x1101dac/0x11c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.779224396s of 18.846429825s, submitted: 29
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112189440 unmapped: 16850944 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 16359424 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213650 data_alloc: 234881024 data_used: 14831616
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221340 data_alloc: 234881024 data_used: 14651392
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221340 data_alloc: 234881024 data_used: 14651392
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221340 data_alloc: 234881024 data_used: 14651392
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221340 data_alloc: 234881024 data_used: 14651392
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.352136612s of 22.470682144s, submitted: 49
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea045e50e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea045e4780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea05bf70e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05803c00 session 0x55ea06acc780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea06accd20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 29753344 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 29753344 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 29753344 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 29753344 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308705 data_alloc: 234881024 data_used: 14655488
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 29687808 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 29687808 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea0679d4a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 29687808 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 29671424 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 29671424 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308837 data_alloc: 234881024 data_used: 14655488
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 29671424 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 29671424 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21880832 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21880832 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21880832 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380429 data_alloc: 234881024 data_used: 24752128
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21880832 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21880832 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 21848064 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 21848064 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 21848064 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380429 data_alloc: 234881024 data_used: 24752128
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 21848064 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 21848064 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.788692474s of 21.901130676s, submitted: 31
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 18677760 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 17989632 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 16621568 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1460577 data_alloc: 234881024 data_used: 25886720
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8969000/0x0/0x4ffc00000, data 0x282fdac/0x28f2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 18309120 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 18309120 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 18309120 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 18300928 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 18300928 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1459105 data_alloc: 234881024 data_used: 25972736
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8960000/0x0/0x4ffc00000, data 0x2839dac/0x28fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 18030592 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea0679de00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea04d56780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 18038784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a800 session 0x55ea03e850e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 26386432 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 26386432 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 26386432 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228581 data_alloc: 234881024 data_used: 14196736
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ee8000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 26386432 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 26386432 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.157867432s of 14.509943008s, submitted: 144
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea03ea25a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea03e4f680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 27934720 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea04d563c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174001 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174001 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174001 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174001 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174001 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.028423309s of 26.106794357s, submitted: 24
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea066654a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea06c17a40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea048ffc20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea06d60b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea048fed20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 27230208 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f99f5000/0x0/0x4ffc00000, data 0x17a5d9c/0x1867000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 27230208 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259905 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 27230208 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 27230208 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f99f5000/0x0/0x4ffc00000, data 0x17a5d9c/0x1867000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06d61680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27598848 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27598848 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 25665536 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310329 data_alloc: 234881024 data_used: 19058688
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f99d1000/0x0/0x4ffc00000, data 0x17c9d9c/0x188b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321121 data_alloc: 234881024 data_used: 20672512
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f99d1000/0x0/0x4ffc00000, data 0x17c9d9c/0x188b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321577 data_alloc: 234881024 data_used: 20684800
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.084711075s of 17.252138138s, submitted: 53
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124837888 unmapped: 17850368 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f908f000/0x0/0x4ffc00000, data 0x210bd9c/0x21cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 17489920 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 18743296 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 18669568 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9006000/0x0/0x4ffc00000, data 0x2193d9c/0x2255000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 18636800 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414765 data_alloc: 234881024 data_used: 22802432
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 18636800 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 18636800 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9006000/0x0/0x4ffc00000, data 0x2193d9c/0x2255000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 18604032 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9006000/0x0/0x4ffc00000, data 0x2193d9c/0x2255000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 18604032 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412829 data_alloc: 234881024 data_used: 22814720
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 18604032 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe6000/0x0/0x4ffc00000, data 0x21b4d9c/0x2276000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe6000/0x0/0x4ffc00000, data 0x21b4d9c/0x2276000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe6000/0x0/0x4ffc00000, data 0x21b4d9c/0x2276000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.732014656s of 13.976600647s, submitted: 118
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413077 data_alloc: 234881024 data_used: 22814720
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe0000/0x0/0x4ffc00000, data 0x21bad9c/0x227c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe0000/0x0/0x4ffc00000, data 0x21bad9c/0x227c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe0000/0x0/0x4ffc00000, data 0x21bad9c/0x227c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124100608 unmapped: 18587648 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124100608 unmapped: 18587648 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414141 data_alloc: 234881024 data_used: 22843392
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124108800 unmapped: 18579456 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe0000/0x0/0x4ffc00000, data 0x21bad9c/0x227c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 18432000 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 18432000 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 18432000 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 18432000 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.906300545s of 10.919371605s, submitted: 4
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417089 data_alloc: 234881024 data_used: 22843392
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b25800 session 0x55ea04ce3860
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea04ce3a40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea04ce34a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea04ce2000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea04ce3e00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 18063360 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 18063360 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85dd000/0x0/0x4ffc00000, data 0x2bbdd9c/0x2c7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 18055168 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04a11000 session 0x55ea04ce32c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 18055168 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea045e4960
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea06665860
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 18055168 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea069e3c20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490465 data_alloc: 234881024 data_used: 22843392
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 18046976 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x2bbddac/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124968960 unmapped: 17719296 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 8749056 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 8658944 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 8658944 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x2bbddac/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560385 data_alloc: 251658240 data_used: 33210368
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134062080 unmapped: 8626176 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134062080 unmapped: 8626176 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x2bbddac/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.262975693s of 12.345156670s, submitted: 14
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134209536 unmapped: 8478720 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134209536 unmapped: 8478720 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85d8000/0x0/0x4ffc00000, data 0x2bc1dac/0x2c84000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134242304 unmapped: 8445952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85d8000/0x0/0x4ffc00000, data 0x2bc1dac/0x2c84000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560113 data_alloc: 251658240 data_used: 33210368
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134275072 unmapped: 8413184 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 8396800 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 138690560 unmapped: 6103040 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 138829824 unmapped: 5963776 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 5111808 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e54000/0x0/0x4ffc00000, data 0x333ddac/0x3400000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1634445 data_alloc: 251658240 data_used: 33914880
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139124736 unmapped: 5668864 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cd000 session 0x55ea03efbe00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.642090797s of 11.800820351s, submitted: 440
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e5c000/0x0/0x4ffc00000, data 0x333ddac/0x3400000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635745 data_alloc: 251658240 data_used: 33914880
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e56000/0x0/0x4ffc00000, data 0x3343dac/0x3406000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635745 data_alloc: 251658240 data_used: 33914880
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635345 data_alloc: 251658240 data_used: 33914880
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e51000/0x0/0x4ffc00000, data 0x3348dac/0x340b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.756099701s of 11.783482552s, submitted: 8
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e4c000/0x0/0x4ffc00000, data 0x334ddac/0x3410000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635385 data_alloc: 251658240 data_used: 33914880
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139173888 unmapped: 5619712 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e49000/0x0/0x4ffc00000, data 0x3350dac/0x3413000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139173888 unmapped: 5619712 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635677 data_alloc: 251658240 data_used: 33914880
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.557255745s of 10.573647499s, submitted: 4
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e49000/0x0/0x4ffc00000, data 0x3350dac/0x3413000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139190272 unmapped: 5603328 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636225 data_alloc: 251658240 data_used: 33914880
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139190272 unmapped: 5603328 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e44000/0x0/0x4ffc00000, data 0x3354dac/0x3417000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139190272 unmapped: 5603328 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139190272 unmapped: 5603328 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e44000/0x0/0x4ffc00000, data 0x3354dac/0x3417000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139214848 unmapped: 5578752 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139214848 unmapped: 5578752 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635993 data_alloc: 251658240 data_used: 33914880
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139214848 unmapped: 5578752 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x3359dac/0x341c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139214848 unmapped: 5578752 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 5570560 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 5570560 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.444334030s of 12.466792107s, submitted: 6
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 5570560 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e3d000/0x0/0x4ffc00000, data 0x335cdac/0x341f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636017 data_alloc: 251658240 data_used: 33914880
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 5570560 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139231232 unmapped: 5562368 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139231232 unmapped: 5562368 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139231232 unmapped: 5562368 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139231232 unmapped: 5562368 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636169 data_alloc: 251658240 data_used: 33914880
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139231232 unmapped: 5562368 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e3a000/0x0/0x4ffc00000, data 0x335fdac/0x3422000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139239424 unmapped: 5554176 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06acd2c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea04d57a40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea03951680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 13017088 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fab000/0x0/0x4ffc00000, data 0x21efd9c/0x22b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 13017088 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 13017088 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.946741104s of 11.008224487s, submitted: 24
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428990 data_alloc: 234881024 data_used: 22908928
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fa6000/0x0/0x4ffc00000, data 0x21f4d9c/0x22b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fa6000/0x0/0x4ffc00000, data 0x21f4d9c/0x22b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428990 data_alloc: 234881024 data_used: 22908928
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fa6000/0x0/0x4ffc00000, data 0x21f4d9c/0x22b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fa6000/0x0/0x4ffc00000, data 0x21f4d9c/0x22b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea06d610e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05c63800 session 0x55ea045e5e00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 12959744 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 23298048 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.918642998s of 10.027234077s, submitted: 43
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea069e30e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.011581421s of 28.023063660s, submitted: 4
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea03efa5a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea06ab2780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea069e2b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea06947680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05c63800 session 0x55ea03ea4f00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 26509312 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 26509312 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f30000/0x0/0x4ffc00000, data 0x126bd3a/0x132c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234174 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f30000/0x0/0x4ffc00000, data 0x126bd3a/0x132c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea03ea4b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 26509312 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea04d4fe00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea068465a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 26509312 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea069e3c20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f0b000/0x0/0x4ffc00000, data 0x128fd4a/0x1351000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f0b000/0x0/0x4ffc00000, data 0x128fd4a/0x1351000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249600 data_alloc: 234881024 data_used: 13393920
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f0b000/0x0/0x4ffc00000, data 0x128fd4a/0x1351000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea069e32c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05c63800 session 0x55ea069463c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263432 data_alloc: 234881024 data_used: 15491072
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.155310631s of 12.628594398s, submitted: 11
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea039505a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ac9c00 session 0x55ea06acda40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04a10800 session 0x55ea06664d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203077 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203077 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.648719788s of 12.687404633s, submitted: 13
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea045e4780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea06c170e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea06c16000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06c174a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06c16b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f993b000/0x0/0x4ffc00000, data 0x1860d3a/0x1921000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 31768576 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 31768576 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285125 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 31760384 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f993b000/0x0/0x4ffc00000, data 0x1860d3a/0x1921000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 31760384 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea04ce3a40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 31760384 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 31760384 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 31760384 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285125 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f993b000/0x0/0x4ffc00000, data 0x1860d3a/0x1921000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 31858688 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 29106176 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 29106176 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 29106176 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f993b000/0x0/0x4ffc00000, data 0x1860d3a/0x1921000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea04ce34a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea04d565a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f993b000/0x0/0x4ffc00000, data 0x1860d3a/0x1921000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 29106176 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.941737175s of 12.008414268s, submitted: 14
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea03e84b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.909063339s of 32.147178650s, submitted: 15
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea03e850e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea03ea3680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea03ea34a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 44343296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea03ea25a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea03ea30e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 44343296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 44343296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283741 data_alloc: 234881024 data_used: 11812864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea03ea2d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 44343296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98fa000/0x0/0x4ffc00000, data 0x18a1d3a/0x1962000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea05bf34a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea05bf2960
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 44335104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea069e25a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 44032000 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 44015616 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120930304 unmapped: 42770432 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x18c5d4a/0x1987000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364823 data_alloc: 234881024 data_used: 22085632
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x18c5d4a/0x1987000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x18c5d4a/0x1987000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364823 data_alloc: 234881024 data_used: 22085632
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x18c5d4a/0x1987000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x18c5d4a/0x1987000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.289739609s of 17.994153976s, submitted: 9
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399031 data_alloc: 234881024 data_used: 22175744
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125911040 unmapped: 37789696 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125911040 unmapped: 37789696 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9493000/0x0/0x4ffc00000, data 0x1d07d4a/0x1dc9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,2,0,0,4])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9493000/0x0/0x4ffc00000, data 0x1d07d4a/0x1dc9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125960192 unmapped: 37740544 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9487000/0x0/0x4ffc00000, data 0x1d13d4a/0x1dd5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408059 data_alloc: 234881024 data_used: 22564864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9487000/0x0/0x4ffc00000, data 0x1d13d4a/0x1dd5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9487000/0x0/0x4ffc00000, data 0x1d13d4a/0x1dd5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408059 data_alloc: 234881024 data_used: 22564864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea069e2000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea04d0c5a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408059 data_alloc: 234881024 data_used: 22564864
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.056972504s of 15.507387161s, submitted: 28
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9487000/0x0/0x4ffc00000, data 0x1d13d4a/0x1dd5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa39f000/0x0/0x4ffc00000, data 0xdfbd4a/0xebd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea069e34a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219886 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219886 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219886 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219886 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219886 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.235197067s of 26.320636749s, submitted: 18
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea03e845a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea03ea4f00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06c8be00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea03ed32c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea03e843c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9880000/0x0/0x4ffc00000, data 0x191bd3a/0x19dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306858 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea069e2780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x191bd5d/0x19dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 42246144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380167 data_alloc: 234881024 data_used: 21360640
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x191bd5d/0x19dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380167 data_alloc: 234881024 data_used: 21360640
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x191bd5d/0x19dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x191bd5d/0x19dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x191bd5d/0x19dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.510463715s of 18.653633118s, submitted: 18
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1462413 data_alloc: 234881024 data_used: 21397504
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 33005568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 32808960 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129646592 unmapped: 34054144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea045be3c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef9400 session 0x55ea06acd4a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130170880 unmapped: 33529856 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea05bf3a40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea06847c20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06946d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 33513472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526265 data_alloc: 234881024 data_used: 21716992
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 33513472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 33480704 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 33480704 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 33480704 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea03efab40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 33480704 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526281 data_alloc: 234881024 data_used: 21716992
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 33480704 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130359296 unmapped: 33341440 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 30244864 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 30244864 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 30121984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560633 data_alloc: 234881024 data_used: 26701824
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 30121984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 30121984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 30121984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 30121984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 30089216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560633 data_alloc: 234881024 data_used: 26701824
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 30089216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 30089216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.776948929s of 22.069581985s, submitted: 105
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 24207360 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7a4b000/0x0/0x4ffc00000, data 0x374fd5d/0x3811000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139476992 unmapped: 24223744 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 23699456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667361 data_alloc: 251658240 data_used: 28028928
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 23699456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 23699456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f799e000/0x0/0x4ffc00000, data 0x37fbd5d/0x38bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 23699456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 23699456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 23543808 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f797e000/0x0/0x4ffc00000, data 0x381cd5d/0x38de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664897 data_alloc: 251658240 data_used: 28028928
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 23535616 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 23535616 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f797e000/0x0/0x4ffc00000, data 0x381cd5d/0x38de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.803740501s of 10.597007751s, submitted: 132
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 23535616 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f797e000/0x0/0x4ffc00000, data 0x381cd5d/0x38de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 23535616 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f797c000/0x0/0x4ffc00000, data 0x381dd5d/0x38df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 23527424 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665233 data_alloc: 251658240 data_used: 28028928
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140345344 unmapped: 23355392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140345344 unmapped: 23355392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6000 session 0x55ea05c58f00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06a38c00 session 0x55ea04d0c780
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f796e000/0x0/0x4ffc00000, data 0x382cd5d/0x38ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 23371776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea05bf3680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 28901376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 28901376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479151 data_alloc: 234881024 data_used: 21716992
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 28901376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x2404d5d/0x24c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea03d9ef00
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea06ab2b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 28901376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.386573792s of 10.000718117s, submitted: 44
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x2404d5d/0x24c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128729088 unmapped: 34971648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea04d0b680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246023 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246023 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246023 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246023 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246023 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea06d60b40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea04ced4a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea0679cb40
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea03efa000
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.284482956s of 24.468833923s, submitted: 10
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06a38c00 session 0x55ea04d0c1e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06a38c00 session 0x55ea05bf23c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea0679c3c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea048ff680
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea06d605a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9cfe000/0x0/0x4ffc00000, data 0x149bdac/0x155e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9cfe000/0x0/0x4ffc00000, data 0x149bdac/0x155e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305495 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea048fe5a0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127328256 unmapped: 36372480 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14bfdcf/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346676 data_alloc: 234881024 data_used: 16285696
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349260 data_alloc: 234881024 data_used: 16650240
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14bfdcf/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.059680939s of 18.250682831s, submitted: 49
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387840 data_alloc: 234881024 data_used: 16707584
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131407872 unmapped: 32292864 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f942d000/0x0/0x4ffc00000, data 0x1953dcf/0x1a17000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9425000/0x0/0x4ffc00000, data 0x1962dcf/0x1a26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395610 data_alloc: 234881024 data_used: 17113088
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9425000/0x0/0x4ffc00000, data 0x1962dcf/0x1a26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9425000/0x0/0x4ffc00000, data 0x1962dcf/0x1a26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395626 data_alloc: 234881024 data_used: 17113088
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9425000/0x0/0x4ffc00000, data 0x1962dcf/0x1a26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 32890880 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.912492752s of 15.101532936s, submitted: 72
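The _kv_sync_thread utilization line is a ratio worth making explicit: the kv sync thread was idle for 14.91 s of a 15.10 s window while committing 72 transactions, i.e. the commit path is effectively unloaded:

    idle, window, submitted = 14.912492752, 15.101532936, 72
    print(f"idle {idle / window:.1%}, ~{submitted / window:.1f} commits/s")   # ~98.7% idle, ~4.8/s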
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea04d4e1e0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea06ab2d20
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393482 data_alloc: 234881024 data_used: 17117184
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 32890880 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea03ed32c0
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 36741120 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: do_command 'config diff' '{prefix=config diff}'
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: do_command 'config show' '{prefix=config show}'
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: do_command 'counter dump' '{prefix=counter dump}'
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: do_command 'counter schema' '{prefix=counter schema}'
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
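The do_command pairs above (config diff/show, counter dump/schema) are admin-socket requests, the kind issued by the ceph CLI or by a metrics collector polling the OSD. A minimal sketch for replaying one of them, assuming the ceph client is available wherever osd.1's admin socket lives (on this kind of deployment, typically inside the OSD container); the lookup key is illustrative:

    import json, subprocess

    # "ceph daemon <name> config show" talks to the local admin socket.
    out = subprocess.check_output(["ceph", "daemon", "osd.1", "config", "show"])
    cfg = json.loads(out)
    print(cfg.get("osd_memory_target"))   # expect "4294967296", the target in the tune_memory lines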
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 36519936 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 37044224 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:28:05 np0005540825 ceph-osd[82809]: do_command 'log dump' '{prefix=log dump}'
Dec  1 05:28:05 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16551 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:06 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26327 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:06 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25828 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:06 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16569 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:28:06 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26345 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:06 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25840 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  1 05:28:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1374163346' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  1 05:28:06 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16584 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:06.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:06.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
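The beast lines are radosgw's access log; anonymous "HEAD / HTTP/1.0" probes arriving every couple of seconds from 192.168.122.100 and .102 with zero latency look like load-balancer health checks rather than user traffic. A small parser sketch for this line shape, with the field layout inferred from the entries above:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s')

    line = ('beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous '
            '[01/Dec/2025:10:28:06.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("lat"))
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000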
Dec  1 05:28:06 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26363 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:06 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25855 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:07 np0005540825 nova_compute[256151]: 2025-12-01 10:28:07.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:28:07 np0005540825 nova_compute[256151]: 2025-12-01 10:28:07.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:28:07 np0005540825 nova_compute[256151]: 2025-12-01 10:28:07.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:28:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec  1 05:28:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2805994041' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  1 05:28:07 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16596 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:07 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26378 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:07.351Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
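Alertmanager is repeatedly failing to deliver the same alert to the ceph-dashboard webhook receivers on compute-1 and compute-2 port 8443; "context deadline exceeded" means the HTTP POST timed out. A quick reachability probe against one endpoint taken verbatim from the error, as a triage starting point (the 5 s timeout is an arbitrary choice):

    import socket

    host, port = "compute-1.ctlplane.example.com", 8443
    try:
        socket.create_connection((host, port), timeout=5).close()
        print("TCP connect ok; the timeout is above the transport layer")
    except OSError as exc:
        print(f"cannot reach {host}:{port}: {exc}")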
Dec  1 05:28:07 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25879 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:07 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16620 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:07 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26408 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:07 np0005540825 podman[282790]: 2025-12-01 10:28:07.718069486 +0000 UTC m=+0.066379503 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:28:07 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.25897 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:07 np0005540825 nova_compute[256151]: 2025-12-01 10:28:07.802 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:07 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16626 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.069 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.070 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.070 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.070 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.070 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
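Nova's resource tracker sizes RBD-backed storage by shelling out to exactly the ceph df command in the line above. A minimal re-run sketch in Python, invoking the same command string and parsing the same JSON (the "stats" keys are standard ceph df output):

    import json, subprocess

    # Command copied verbatim from the log line above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])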
Dec  1 05:28:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec  1 05:28:08 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253431688' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.251 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:08 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16641 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec  1 05:28:08 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4130972969' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  1 05:28:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:28:08 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2428953660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.548 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.728 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.729 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4395MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.730 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.730 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:28:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:08.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.802 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.802 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:28:08 np0005540825 nova_compute[256151]: 2025-12-01 10:28:08.819 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:28:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:08.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:08 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16668 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:08.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec  1 05:28:08 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/668896965' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  1 05:28:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:28:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:28:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2057561117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:28:09 np0005540825 nova_compute[256151]: 2025-12-01 10:28:09.331 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:28:09 np0005540825 nova_compute[256151]: 2025-12-01 10:28:09.337 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:28:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec  1 05:28:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1435054120' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  1 05:28:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec  1 05:28:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573024948' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  1 05:28:09 np0005540825 nova_compute[256151]: 2025-12-01 10:28:09.353 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:28:09 np0005540825 nova_compute[256151]: 2025-12-01 10:28:09.356 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:28:09 np0005540825 nova_compute[256151]: 2025-12-01 10:28:09.356 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:28:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:28:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:28:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:28:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:28:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:28:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:28:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:28:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:28:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec  1 05:28:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352800766' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2712621208' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4070912794' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec  1 05:28:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035109161' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3724199474' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec  1 05:28:10 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26546 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548317732' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec  1 05:28:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:10.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:10.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec  1 05:28:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3822245562' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  1 05:28:10 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26020 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:11 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26564 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  1 05:28:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760713574' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  1 05:28:11 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26570 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:11] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec  1 05:28:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:11] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec  1 05:28:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec  1 05:28:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252266620' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec  1 05:28:11 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26032 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:11 np0005540825 systemd[1]: Starting Hostname Service...
Dec  1 05:28:11 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26038 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:11 np0005540825 podman[283315]: 2025-12-01 10:28:11.544747617 +0000 UTC m=+0.088616193 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 05:28:11 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26582 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec  1 05:28:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2484782800' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec  1 05:28:11 np0005540825 systemd[1]: Started Hostname Service.
Dec  1 05:28:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec  1 05:28:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3356989962' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec  1 05:28:11 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26050 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:11 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26597 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:12 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16797 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:12 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16803 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:12 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26615 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec  1 05:28:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826764103' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec  1 05:28:12 np0005540825 nova_compute[256151]: 2025-12-01 10:28:12.358 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:28:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:28:12 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16818 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:12 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26077 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:12 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26624 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:28:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:12.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:28:12 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16827 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:12.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:12 np0005540825 nova_compute[256151]: 2025-12-01 10:28:12.858 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:13 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26636 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:13 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16839 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:13 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26089 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:13 np0005540825 nova_compute[256151]: 2025-12-01 10:28:13.253 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec  1 05:28:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008874652' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec  1 05:28:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  1 05:28:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  1 05:28:13 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26648 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:13 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16851 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:13 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26104 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:13.729Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:28:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:13.730Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec  1 05:28:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942174856' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec  1 05:28:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  1 05:28:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  1 05:28:13 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16869 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:28:14 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26125 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:14 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16887 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec  1 05:28:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/831062327' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  1 05:28:14 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26699 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:14.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:14 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16893 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec  1 05:28:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043111903' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec  1 05:28:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:14.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  1 05:28:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  1 05:28:14 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26170 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:15 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16917 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Dec  1 05:28:15 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/251848729' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec  1 05:28:16 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16947 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:28:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec  1 05:28:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1550480148' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec  1 05:28:16 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26759 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:16.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:16.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec  1 05:28:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960432479' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec  1 05:28:17 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26218 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:17.352Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec  1 05:28:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349683807' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec  1 05:28:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec  1 05:28:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805565623' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec  1 05:28:17 np0005540825 nova_compute[256151]: 2025-12-01 10:28:17.888 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:17 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26789 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:18 np0005540825 nova_compute[256151]: 2025-12-01 10:28:18.255 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:18 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.16980 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:18 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26245 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:18 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec  1 05:28:18 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633188691' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec  1 05:28:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:18.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:18 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26801 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:18.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:18.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:28:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec  1 05:28:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210079615' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec  1 05:28:19 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26807 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:19 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26257 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:19 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17007 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:19 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26266 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:19 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec  1 05:28:19 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572732039' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17022 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26834 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:20.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26275 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26843 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:20 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:28:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:20.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec  1 05:28:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1445338784' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec  1 05:28:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:21] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:21] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec  1 05:28:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec  1 05:28:21 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711842620' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26290 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17046 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26861 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26296 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:21 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17055 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26867 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:28:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Dec  1 05:28:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1369578988' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec  1 05:28:22 np0005540825 ovs-appctl[285385]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  1 05:28:22 np0005540825 ovs-appctl[285389]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  1 05:28:22 np0005540825 ovs-appctl[285401]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  1 05:28:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Dec  1 05:28:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/284725427' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec  1 05:28:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:22.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:22 np0005540825 nova_compute[256151]: 2025-12-01 10:28:22.894 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:22.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:22 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Dec  1 05:28:22 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1424096683' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec  1 05:28:23 np0005540825 nova_compute[256151]: 2025-12-01 10:28:23.256 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:23 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26320 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:23 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17085 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:23 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26326 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:23 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17094 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:28:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:23.730Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:28:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:23.731Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:28:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:23.731Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:28:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:28:24 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26903 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec  1 05:28:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1086875295' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  1 05:28:24 np0005540825 podman[286041]: 2025-12-01 10:28:24.275376997 +0000 UTC m=+0.136089414 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 05:28:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:28:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:28:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Dec  1 05:28:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854596344' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec  1 05:28:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:24.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:24.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
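
The anonymous "HEAD / HTTP/1.0" requests arriving from 192.168.122.100 and .102 roughly every two seconds look like load-balancer health probes rather than real S3 traffic. The beast access-log line has a stable shape, so it parses with one regular expression; a sketch, with field names of my own choosing:

    import re

    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<size>\d+)'
        r'.* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous '
            '[01/Dec/2025:10:28:24.753 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m["client"], m["status"], float(m["latency"]))  # 192.168.122.100 200 0.0
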
Dec  1 05:28:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Dec  1 05:28:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187522058' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec  1 05:28:25 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17121 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:25 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26356 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Dec  1 05:28:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1184195527' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  1 05:28:26 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26948 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Dec  1 05:28:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4048496310' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec  1 05:28:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:28:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:26.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Dec  1 05:28:26 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1510056962' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec  1 05:28:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:26.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:27 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Dec  1 05:28:27 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3039484457' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec  1 05:28:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:27.353Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:28:27 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.1 total, 600.0 interval
    Cumulative writes: 13K writes, 49K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 13K writes, 3865 syncs, 3.48 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2619 writes, 9599 keys, 2619 commit groups, 1.0 writes per commit group, ingest: 9.93 MB, 0.02 MB/s
    Interval WAL: 2619 writes, 1062 syncs, 2.47 writes per sync, written: 0.01 GB, 0.02 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
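
The derived figures in the interval stats are plain ratios over the 600 s window, which gives a quick sanity check:

    # Interval WAL: 2619 writes / 1062 syncs, as reported above.
    print(round(2619 / 1062, 2))    # 2.47 writes per sync
    # Interval ingest: 9.93 MB over the 600.0 s interval.
    print(round(9.93 / 600.0, 2))   # 0.02 MB/s
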
Dec  1 05:28:27 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17154 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:27 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26969 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:27 np0005540825 nova_compute[256151]: 2025-12-01 10:28:27.897 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:28 np0005540825 nova_compute[256151]: 2025-12-01 10:28:28.258 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:28 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26401 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Dec  1 05:28:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139865496' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec  1 05:28:28 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26984 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:28.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:28.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:28.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:28 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26990 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:28 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Dec  1 05:28:28 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1851788279' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec  1 05:28:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
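
nfs-ganesha's rados_cluster backend logs ret=-45 here without decoding it. If the value follows the usual negative-errno convention (an assumption; the log itself does not say), the symbolic name is easy to look up:

    import errno
    import os

    ret = -45  # as logged by rados_cluster_grace_enforcing above
    print(errno.errorcode.get(-ret), "-", os.strerror(-ret))
    # On Linux: EL2NSYNC - Level 2 not synchronized
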
Dec  1 05:28:29 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17175 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 05:28:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 05:28:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:28:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:28:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:29 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26422 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Dec  1 05:28:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200155020' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27014 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17196 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27020 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
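
Each pg_autoscaler pair above is simple arithmetic: the logged "pg target" is the pool's share of raw capacity times its bias times the cluster-wide PG budget. A back-of-the-envelope reconstruction (an assumption, not the Ceph source) that reproduces the logged values, taking the default mon_target_pg_per_osd of 100 and the 3 OSDs this 60 GiB cluster appears to have; the "quantized" figure then rounds toward a power of two, with only 'cephfs.cephfs.meta' quantizing down to 16 while the other pools stay at their current 32:

    # Assumption: pg_target = capacity_ratio * bias * (OSDs * target PGs per OSD).
    # 3 OSDs x mon_target_pg_per_osd=100 gives a budget of 300, which reproduces
    # the logged values to floating-point precision.
    def pg_target(capacity_ratio, bias, num_osds=3, target_pg_per_osd=100):
        return capacity_ratio * bias * num_osds * target_pg_per_osd

    print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557 ('.mgr')
    print(pg_target(0.000665858301588852, 1.0))   # ~0.1997575 ('images')
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.0006105 ('cephfs.cephfs.meta')
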
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26440 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17205 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:28:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:30.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:28:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:30.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:30 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26446 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Dec  1 05:28:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1544160' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec  1 05:28:30 np0005540825 podman[287468]: 2025-12-01 10:28:30.979628265 +0000 UTC m=+0.050049900 container create 477924cf8197b8d630365fd8c6e03d249923c97f1ad1c55bd125182d6d4f753b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_merkle, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:28:31 np0005540825 systemd[1]: Started libpod-conmon-477924cf8197b8d630365fd8c6e03d249923c97f1ad1c55bd125182d6d4f753b.scope.
Dec  1 05:28:31 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:28:31 np0005540825 podman[287468]: 2025-12-01 10:28:30.953807499 +0000 UTC m=+0.024229164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:28:31 np0005540825 podman[287468]: 2025-12-01 10:28:31.062514875 +0000 UTC m=+0.132936540 container init 477924cf8197b8d630365fd8c6e03d249923c97f1ad1c55bd125182d6d4f753b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_merkle, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:28:31 np0005540825 podman[287468]: 2025-12-01 10:28:31.093681063 +0000 UTC m=+0.164102708 container start 477924cf8197b8d630365fd8c6e03d249923c97f1ad1c55bd125182d6d4f753b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:28:31 np0005540825 podman[287468]: 2025-12-01 10:28:31.09659635 +0000 UTC m=+0.167018005 container attach 477924cf8197b8d630365fd8c6e03d249923c97f1ad1c55bd125182d6d4f753b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  1 05:28:31 np0005540825 jovial_merkle[287489]: 167 167
Dec  1 05:28:31 np0005540825 podman[287468]: 2025-12-01 10:28:31.099167908 +0000 UTC m=+0.169589563 container died 477924cf8197b8d630365fd8c6e03d249923c97f1ad1c55bd125182d6d4f753b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  1 05:28:31 np0005540825 systemd[1]: libpod-477924cf8197b8d630365fd8c6e03d249923c97f1ad1c55bd125182d6d4f753b.scope: Deactivated successfully.
Dec  1 05:28:31 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e62014a52d04879222d1fa0993ee3dbf4988f413ca3447fa916405f29fc2019e-merged.mount: Deactivated successfully.
Dec  1 05:28:31 np0005540825 podman[287468]: 2025-12-01 10:28:31.151443976 +0000 UTC m=+0.221865611 container remove 477924cf8197b8d630365fd8c6e03d249923c97f1ad1c55bd125182d6d4f753b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:28:31 np0005540825 systemd[1]: libpod-conmon-477924cf8197b8d630365fd8c6e03d249923c97f1ad1c55bd125182d6d4f753b.scope: Deactivated successfully.
Dec  1 05:28:31 np0005540825 podman[287544]: 2025-12-01 10:28:31.337144596 +0000 UTC m=+0.053111881 container create bb6b3ab02e4ab812968eda026c525433d86d26b6a6597755eecfecba87299df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:28:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Dec  1 05:28:31 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496822608' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec  1 05:28:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:31] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec  1 05:28:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:31] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Dec  1 05:28:31 np0005540825 podman[287544]: 2025-12-01 10:28:31.313709074 +0000 UTC m=+0.029676389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:28:31 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27047 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:31 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17229 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27053 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:32 np0005540825 systemd[1]: Started libpod-conmon-bb6b3ab02e4ab812968eda026c525433d86d26b6a6597755eecfecba87299df1.scope.
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17241 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26470 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:28:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e72f98b1410eccf4d361592d51ae31f664942f445234798ec4ee763e3b8a2c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e72f98b1410eccf4d361592d51ae31f664942f445234798ec4ee763e3b8a2c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e72f98b1410eccf4d361592d51ae31f664942f445234798ec4ee763e3b8a2c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e72f98b1410eccf4d361592d51ae31f664942f445234798ec4ee763e3b8a2c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:32 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e72f98b1410eccf4d361592d51ae31f664942f445234798ec4ee763e3b8a2c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
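
The xfs warnings above are benign: 0x7fffffff is the 32-bit time_t limit, and the kernel repeats the message for every path bind-mounted into the container. Decoding the constant shows the date in question:

    import datetime

    # 0x7fffffff == 2**31 - 1, the largest 32-bit signed time_t.
    limit = datetime.datetime.fromtimestamp(0x7fffffff, datetime.timezone.utc)
    print(limit)  # 2038-01-19 03:14:07+00:00
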
Dec  1 05:28:32 np0005540825 podman[287544]: 2025-12-01 10:28:32.143260818 +0000 UTC m=+0.859228123 container init bb6b3ab02e4ab812968eda026c525433d86d26b6a6597755eecfecba87299df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 05:28:32 np0005540825 podman[287544]: 2025-12-01 10:28:32.152542384 +0000 UTC m=+0.868509669 container start bb6b3ab02e4ab812968eda026c525433d86d26b6a6597755eecfecba87299df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Dec  1 05:28:32 np0005540825 podman[287544]: 2025-12-01 10:28:32.162822657 +0000 UTC m=+0.878789952 container attach bb6b3ab02e4ab812968eda026c525433d86d26b6a6597755eecfecba87299df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_rosalind, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Dec  1 05:28:32 np0005540825 recursing_rosalind[287613]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:28:32 np0005540825 recursing_rosalind[287613]: --> All data devices are unavailable
Dec  1 05:28:32 np0005540825 podman[287544]: 2025-12-01 10:28:32.481332393 +0000 UTC m=+1.197299688 container died bb6b3ab02e4ab812968eda026c525433d86d26b6a6597755eecfecba87299df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_rosalind, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  1 05:28:32 np0005540825 systemd[1]: libpod-bb6b3ab02e4ab812968eda026c525433d86d26b6a6597755eecfecba87299df1.scope: Deactivated successfully.
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26476 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:32 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:28:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Dec  1 05:28:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673366903' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  1 05:28:32 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1e72f98b1410eccf4d361592d51ae31f664942f445234798ec4ee763e3b8a2c6-merged.mount: Deactivated successfully.
Dec  1 05:28:32 np0005540825 podman[287544]: 2025-12-01 10:28:32.531863265 +0000 UTC m=+1.247830550 container remove bb6b3ab02e4ab812968eda026c525433d86d26b6a6597755eecfecba87299df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 05:28:32 np0005540825 systemd[1]: libpod-conmon-bb6b3ab02e4ab812968eda026c525433d86d26b6a6597755eecfecba87299df1.scope: Deactivated successfully.
Dec  1 05:28:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:32.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:32 np0005540825 nova_compute[256151]: 2025-12-01 10:28:32.900 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:32.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:32 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Dec  1 05:28:32 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766592132' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec  1 05:28:33 np0005540825 podman[287818]: 2025-12-01 10:28:33.155191533 +0000 UTC m=+0.045559300 container create 8b404015babe6f1793998bd8200317b0d1576c30152ceebc3832999afbb90d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_brown, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:28:33 np0005540825 systemd[1]: Started libpod-conmon-8b404015babe6f1793998bd8200317b0d1576c30152ceebc3832999afbb90d99.scope.
Dec  1 05:28:33 np0005540825 podman[287818]: 2025-12-01 10:28:33.135975863 +0000 UTC m=+0.026343660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:28:33 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:28:33 np0005540825 podman[287818]: 2025-12-01 10:28:33.257362416 +0000 UTC m=+0.147730213 container init 8b404015babe6f1793998bd8200317b0d1576c30152ceebc3832999afbb90d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_brown, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:28:33 np0005540825 nova_compute[256151]: 2025-12-01 10:28:33.259 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:33 np0005540825 podman[287818]: 2025-12-01 10:28:33.266749625 +0000 UTC m=+0.157117392 container start 8b404015babe6f1793998bd8200317b0d1576c30152ceebc3832999afbb90d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_brown, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  1 05:28:33 np0005540825 condescending_brown[287836]: 167 167
Dec  1 05:28:33 np0005540825 podman[287818]: 2025-12-01 10:28:33.272081197 +0000 UTC m=+0.162448984 container attach 8b404015babe6f1793998bd8200317b0d1576c30152ceebc3832999afbb90d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_brown, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  1 05:28:33 np0005540825 systemd[1]: libpod-8b404015babe6f1793998bd8200317b0d1576c30152ceebc3832999afbb90d99.scope: Deactivated successfully.
Dec  1 05:28:33 np0005540825 podman[287818]: 2025-12-01 10:28:33.27333499 +0000 UTC m=+0.163702757 container died 8b404015babe6f1793998bd8200317b0d1576c30152ceebc3832999afbb90d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_brown, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  1 05:28:33 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a0022c65b1605c7ea8ece299811dd61f4d25134d150b5455308b499de392d840-merged.mount: Deactivated successfully.
Dec  1 05:28:33 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17265 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:33 np0005540825 podman[287818]: 2025-12-01 10:28:33.31325646 +0000 UTC m=+0.203624227 container remove 8b404015babe6f1793998bd8200317b0d1576c30152ceebc3832999afbb90d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  1 05:28:33 np0005540825 systemd[1]: libpod-conmon-8b404015babe6f1793998bd8200317b0d1576c30152ceebc3832999afbb90d99.scope: Deactivated successfully.
Dec  1 05:28:33 np0005540825 podman[287873]: 2025-12-01 10:28:33.474164682 +0000 UTC m=+0.048681754 container create 70a61f8828a37cdd845baf5effb71af28e7e027aef1706511b7bda2c2a64f678 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:28:33 np0005540825 systemd[1]: Started libpod-conmon-70a61f8828a37cdd845baf5effb71af28e7e027aef1706511b7bda2c2a64f678.scope.
Dec  1 05:28:33 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:28:33 np0005540825 podman[287873]: 2025-12-01 10:28:33.450161314 +0000 UTC m=+0.024678406 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:28:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff10379f6efba54d2c1daad7a51c04fe0f970b6681777354688446e540d9ce17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff10379f6efba54d2c1daad7a51c04fe0f970b6681777354688446e540d9ce17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff10379f6efba54d2c1daad7a51c04fe0f970b6681777354688446e540d9ce17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff10379f6efba54d2c1daad7a51c04fe0f970b6681777354688446e540d9ce17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:33 np0005540825 podman[287873]: 2025-12-01 10:28:33.562044625 +0000 UTC m=+0.136561707 container init 70a61f8828a37cdd845baf5effb71af28e7e027aef1706511b7bda2c2a64f678 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:28:33 np0005540825 podman[287873]: 2025-12-01 10:28:33.570216702 +0000 UTC m=+0.144733784 container start 70a61f8828a37cdd845baf5effb71af28e7e027aef1706511b7bda2c2a64f678 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:28:33 np0005540825 podman[287873]: 2025-12-01 10:28:33.576452567 +0000 UTC m=+0.150969659 container attach 70a61f8828a37cdd845baf5effb71af28e7e027aef1706511b7bda2c2a64f678 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 05:28:33 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26500 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:33.732Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:28:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:33.734Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:33 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17268 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]: {
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:    "1": [
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:        {
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            "devices": [
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "/dev/loop3"
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            ],
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            "lv_name": "ceph_lv0",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            "lv_size": "21470642176",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            "name": "ceph_lv0",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            "tags": {
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.cluster_name": "ceph",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.crush_device_class": "",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.encrypted": "0",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.osd_id": "1",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.type": "block",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.vdo": "0",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:                "ceph.with_tpm": "0"
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            },
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            "type": "block",
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:            "vg_name": "ceph_vg0"
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:        }
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]:    ]
Dec  1 05:28:33 np0005540825 romantic_taussig[287915]: }
Dec  1 05:28:33 np0005540825 systemd[1]: libpod-70a61f8828a37cdd845baf5effb71af28e7e027aef1706511b7bda2c2a64f678.scope: Deactivated successfully.
Dec  1 05:28:33 np0005540825 podman[287873]: 2025-12-01 10:28:33.907261059 +0000 UTC m=+0.481778141 container died 70a61f8828a37cdd845baf5effb71af28e7e027aef1706511b7bda2c2a64f678 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:28:33 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ff10379f6efba54d2c1daad7a51c04fe0f970b6681777354688446e540d9ce17-merged.mount: Deactivated successfully.
Dec  1 05:28:33 np0005540825 podman[287873]: 2025-12-01 10:28:33.955405957 +0000 UTC m=+0.529923029 container remove 70a61f8828a37cdd845baf5effb71af28e7e027aef1706511b7bda2c2a64f678 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 05:28:33 np0005540825 systemd[1]: libpod-conmon-70a61f8828a37cdd845baf5effb71af28e7e027aef1706511b7bda2c2a64f678.scope: Deactivated successfully.
Dec  1 05:28:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:28:34 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26509 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:28:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  1 05:28:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227227138' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  1 05:28:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Dec  1 05:28:34 np0005540825 podman[288298]: 2025-12-01 10:28:34.539026181 +0000 UTC m=+0.030781268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:28:34 np0005540825 virtqemud[255660]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  1 05:28:34 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Dec  1 05:28:34 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/211240088' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec  1 05:28:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:34.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:34 np0005540825 podman[288298]: 2025-12-01 10:28:34.863827415 +0000 UTC m=+0.355582472 container create 0338ceb35c24e5df6b02522ce7f29c28c657ff327baecdc4667b949babacd9ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hoover, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 05:28:34 np0005540825 systemd[1]: Started libpod-conmon-0338ceb35c24e5df6b02522ce7f29c28c657ff327baecdc4667b949babacd9ed.scope.
Dec  1 05:28:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:34.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:34 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:28:34 np0005540825 podman[288298]: 2025-12-01 10:28:34.953542376 +0000 UTC m=+0.445297463 container init 0338ceb35c24e5df6b02522ce7f29c28c657ff327baecdc4667b949babacd9ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 05:28:34 np0005540825 podman[288298]: 2025-12-01 10:28:34.964168318 +0000 UTC m=+0.455923395 container start 0338ceb35c24e5df6b02522ce7f29c28c657ff327baecdc4667b949babacd9ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 05:28:34 np0005540825 podman[288298]: 2025-12-01 10:28:34.968425442 +0000 UTC m=+0.460180519 container attach 0338ceb35c24e5df6b02522ce7f29c28c657ff327baecdc4667b949babacd9ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:28:34 np0005540825 nervous_hoover[288369]: 167 167
Dec  1 05:28:34 np0005540825 conmon[288369]: conmon 0338ceb35c24e5df6b02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0338ceb35c24e5df6b02522ce7f29c28c657ff327baecdc4667b949babacd9ed.scope/container/memory.events
Dec  1 05:28:34 np0005540825 systemd[1]: libpod-0338ceb35c24e5df6b02522ce7f29c28c657ff327baecdc4667b949babacd9ed.scope: Deactivated successfully.
Dec  1 05:28:34 np0005540825 podman[288298]: 2025-12-01 10:28:34.971900944 +0000 UTC m=+0.463656001 container died 0338ceb35c24e5df6b02522ce7f29c28c657ff327baecdc4667b949babacd9ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:28:34 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d8349a024fb5ad69c27848597a633fc8294f12444d55f0294ad438206cd6b0c3-merged.mount: Deactivated successfully.
Dec  1 05:28:35 np0005540825 podman[288298]: 2025-12-01 10:28:35.003678907 +0000 UTC m=+0.495433954 container remove 0338ceb35c24e5df6b02522ce7f29c28c657ff327baecdc4667b949babacd9ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hoover, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 05:28:35 np0005540825 systemd[1]: libpod-conmon-0338ceb35c24e5df6b02522ce7f29c28c657ff327baecdc4667b949babacd9ed.scope: Deactivated successfully.
Dec  1 05:28:35 np0005540825 podman[288425]: 2025-12-01 10:28:35.188188186 +0000 UTC m=+0.055970007 container create e40348de4f376aa04fa906f06c9755aa1657152cc59d6220084c191f84754127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  1 05:28:35 np0005540825 systemd[1]: Started libpod-conmon-e40348de4f376aa04fa906f06c9755aa1657152cc59d6220084c191f84754127.scope.
Dec  1 05:28:35 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:28:35 np0005540825 podman[288425]: 2025-12-01 10:28:35.167154388 +0000 UTC m=+0.034936139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:28:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b8ac8708a795ad740e63afde1cbcbaf981329d9bcee214e317e1fed0fca923/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b8ac8708a795ad740e63afde1cbcbaf981329d9bcee214e317e1fed0fca923/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b8ac8708a795ad740e63afde1cbcbaf981329d9bcee214e317e1fed0fca923/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b8ac8708a795ad740e63afde1cbcbaf981329d9bcee214e317e1fed0fca923/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:28:35 np0005540825 podman[288425]: 2025-12-01 10:28:35.280429785 +0000 UTC m=+0.148211526 container init e40348de4f376aa04fa906f06c9755aa1657152cc59d6220084c191f84754127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:28:35 np0005540825 podman[288425]: 2025-12-01 10:28:35.287968495 +0000 UTC m=+0.155750226 container start e40348de4f376aa04fa906f06c9755aa1657152cc59d6220084c191f84754127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_thompson, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:28:35 np0005540825 podman[288425]: 2025-12-01 10:28:35.299367988 +0000 UTC m=+0.167149729 container attach e40348de4f376aa04fa906f06c9755aa1657152cc59d6220084c191f84754127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:28:35 np0005540825 systemd[1]: Starting Time & Date Service...
Dec  1 05:28:35 np0005540825 systemd[1]: Started Time & Date Service.
Dec  1 05:28:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:35 np0005540825 lvm[288584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:28:35 np0005540825 lvm[288584]: VG ceph_vg0 finished
Dec  1 05:28:35 np0005540825 gifted_thompson[288447]: {}
Dec  1 05:28:36 np0005540825 systemd[1]: libpod-e40348de4f376aa04fa906f06c9755aa1657152cc59d6220084c191f84754127.scope: Deactivated successfully.
Dec  1 05:28:36 np0005540825 podman[288425]: 2025-12-01 10:28:36.018058248 +0000 UTC m=+0.885839989 container died e40348de4f376aa04fa906f06c9755aa1657152cc59d6220084c191f84754127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 05:28:36 np0005540825 systemd[1]: libpod-e40348de4f376aa04fa906f06c9755aa1657152cc59d6220084c191f84754127.scope: Consumed 1.079s CPU time.
Dec  1 05:28:36 np0005540825 systemd[1]: var-lib-containers-storage-overlay-75b8ac8708a795ad740e63afde1cbcbaf981329d9bcee214e317e1fed0fca923-merged.mount: Deactivated successfully.
Dec  1 05:28:36 np0005540825 podman[288425]: 2025-12-01 10:28:36.066669539 +0000 UTC m=+0.934451260 container remove e40348de4f376aa04fa906f06c9755aa1657152cc59d6220084c191f84754127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_thompson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 05:28:36 np0005540825 systemd[1]: libpod-conmon-e40348de4f376aa04fa906f06c9755aa1657152cc59d6220084c191f84754127.scope: Deactivated successfully.
Dec  1 05:28:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:28:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:28:36 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Dec  1 05:28:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:36.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:36.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:37 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:37 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:28:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:37.356Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:28:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:37.356Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:28:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:37.357Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:28:37 np0005540825 nova_compute[256151]: 2025-12-01 10:28:37.932 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:38 np0005540825 podman[288658]: 2025-12-01 10:28:38.190099762 +0000 UTC m=+0.059625194 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:28:38 np0005540825 nova_compute[256151]: 2025-12-01 10:28:38.261 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Dec  1 05:28:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:38.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:38.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:28:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:38.872Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:28:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:38.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:38.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:28:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:28:39
Dec  1 05:28:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:28:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:28:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.meta', '.mgr', '.nfs', 'images', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.meta']
Dec  1 05:28:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:28:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:28:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:28:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:28:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:28:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:28:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:28:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:28:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:28:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Dec  1 05:28:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:40.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:28:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:40.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:28:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:41] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:28:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:41] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:28:42 np0005540825 podman[288684]: 2025-12-01 10:28:42.207005156 +0000 UTC m=+0.068985703 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:28:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:28:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:42.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:28:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:42.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:28:42 np0005540825 nova_compute[256151]: 2025-12-01 10:28:42.984 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:43 np0005540825 nova_compute[256151]: 2025-12-01 10:28:43.262 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:43.735Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:28:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:43.736Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:28:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:43.736Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:28:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:44.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:44.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:28:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:46.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:46.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:47.358Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:28:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:47.358Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:28:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:47.359Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:47 np0005540825 nova_compute[256151]: 2025-12-01 10:28:47.986 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:48 np0005540825 nova_compute[256151]: 2025-12-01 10:28:48.264 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:48.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:48.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:48.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:28:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:50.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:51] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:28:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:28:51] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:28:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:28:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:52.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:52.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:52 np0005540825 nova_compute[256151]: 2025-12-01 10:28:52.995 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:53 np0005540825 nova_compute[256151]: 2025-12-01 10:28:53.266 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:53.737Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:28:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:28:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:28:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:54.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:28:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:54.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:28:55 np0005540825 podman[288745]: 2025-12-01 10:28:55.234543417 +0000 UTC m=+0.093469452 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:28:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:28:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:28:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:28:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:56.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:28:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:56.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:57 np0005540825 nova_compute[256151]: 2025-12-01 10:28:57.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:28:57 np0005540825 nova_compute[256151]: 2025-12-01 10:28:57.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:28:57 np0005540825 nova_compute[256151]: 2025-12-01 10:28:57.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:28:57 np0005540825 nova_compute[256151]: 2025-12-01 10:28:57.057 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:28:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:57.359Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:28:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:57.360Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:58 np0005540825 nova_compute[256151]: 2025-12-01 10:28:57.997 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:58 np0005540825 nova_compute[256151]: 2025-12-01 10:28:58.269 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:28:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:28:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:28:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:28:58.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:28:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:28:58.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:28:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:28:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:28:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:28:58.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:28:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:28:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:28:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:28:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:28:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:00 np0005540825 nova_compute[256151]: 2025-12-01 10:29:00.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:29:00 np0005540825 nova_compute[256151]: 2025-12-01 10:29:00.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:29:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:00.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:00.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:01 np0005540825 nova_compute[256151]: 2025-12-01 10:29:01.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:29:01 np0005540825 nova_compute[256151]: 2025-12-01 10:29:01.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:29:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:01] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:29:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:01] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:29:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:29:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:02.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:03 np0005540825 nova_compute[256151]: 2025-12-01 10:29:02.999 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:03 np0005540825 nova_compute[256151]: 2025-12-01 10:29:03.271 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:03.738Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:29:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:03.738Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:29:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:03.739Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:29:04.586 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:29:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:29:04.587 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:29:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:29:04.587 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:29:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:04.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:04.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:05 np0005540825 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  1 05:29:05 np0005540825 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 05:29:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:29:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:06.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:29:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:06.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:29:07 np0005540825 nova_compute[256151]: 2025-12-01 10:29:07.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:29:07 np0005540825 nova_compute[256151]: 2025-12-01 10:29:07.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:29:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:07.361Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.001 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.052 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.053 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.053 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.053 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.053 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.273 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:08 np0005540825 podman[288809]: 2025-12-01 10:29:08.287820334 +0000 UTC m=+0.065369396 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 05:29:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:29:08 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697846383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.566 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.762 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.763 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4366MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.764 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.764 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:29:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:08.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.828 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.828 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:29:08 np0005540825 nova_compute[256151]: 2025-12-01 10:29:08.845 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:29:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:08.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:29:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:08.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:29:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:08.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:29:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4009921699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:29:09 np0005540825 nova_compute[256151]: 2025-12-01 10:29:09.307 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:29:09 np0005540825 nova_compute[256151]: 2025-12-01 10:29:09.313 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:29:09 np0005540825 nova_compute[256151]: 2025-12-01 10:29:09.453 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:29:09 np0005540825 nova_compute[256151]: 2025-12-01 10:29:09.455 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:29:09 np0005540825 nova_compute[256151]: 2025-12-01 10:29:09.456 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:29:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:29:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:29:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:29:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:29:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:29:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:29:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:29:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:29:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:10 np0005540825 nova_compute[256151]: 2025-12-01 10:29:10.456 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:29:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:10.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:29:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:10.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:29:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:11] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:29:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:11] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:29:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:29:12 np0005540825 podman[288882]: 2025-12-01 10:29:12.663274806 +0000 UTC m=+0.075128875 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:29:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:12.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:12.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:13 np0005540825 nova_compute[256151]: 2025-12-01 10:29:13.002 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:13 np0005540825 nova_compute[256151]: 2025-12-01 10:29:13.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:29:13 np0005540825 nova_compute[256151]: 2025-12-01 10:29:13.275 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:13.740Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:29:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:13.744Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:29:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:13.745Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:29:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:14.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:29:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:14.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:29:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:29:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:16.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:29:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:16.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:17.362Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:18 np0005540825 nova_compute[256151]: 2025-12-01 10:29:18.004 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:18 np0005540825 nova_compute[256151]: 2025-12-01 10:29:18.278 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:18.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:18.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:18.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:20.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:20.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:21] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:29:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:21] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:29:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:29:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:22.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:22.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:23 np0005540825 nova_compute[256151]: 2025-12-01 10:29:23.006 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:23 np0005540825 nova_compute[256151]: 2025-12-01 10:29:23.280 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:23.746Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:29:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:29:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:24.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:24.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:26 np0005540825 podman[288915]: 2025-12-01 10:29:26.172009192 +0000 UTC m=+0.123433438 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller)
Dec  1 05:29:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:29:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:29:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:26.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:29:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:26.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:27.363Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:29:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:27.363Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:29:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:27.364Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:29:27 np0005540825 systemd[1]: session-56.scope: Deactivated successfully.
Dec  1 05:29:27 np0005540825 systemd[1]: session-56.scope: Consumed 2min 57.201s CPU time, 866.0M memory peak, read 353.9M from disk, written 147.0M to disk.
Dec  1 05:29:27 np0005540825 systemd-logind[789]: Session 56 logged out. Waiting for processes to exit.
Dec  1 05:29:27 np0005540825 systemd-logind[789]: Removed session 56.
Dec  1 05:29:28 np0005540825 nova_compute[256151]: 2025-12-01 10:29:28.009 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:28 np0005540825 systemd-logind[789]: New session 57 of user zuul.
Dec  1 05:29:28 np0005540825 systemd[1]: Started Session 57 of User zuul.
Dec  1 05:29:28 np0005540825 nova_compute[256151]: 2025-12-01 10:29:28.282 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:28 np0005540825 systemd[1]: session-57.scope: Deactivated successfully.
Dec  1 05:29:28 np0005540825 systemd-logind[789]: Session 57 logged out. Waiting for processes to exit.
Dec  1 05:29:28 np0005540825 systemd-logind[789]: Removed session 57.
Dec  1 05:29:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:28 np0005540825 systemd-logind[789]: New session 58 of user zuul.
Dec  1 05:29:28 np0005540825 systemd[1]: Started Session 58 of User zuul.
Dec  1 05:29:28 np0005540825 systemd[1]: session-58.scope: Deactivated successfully.
Dec  1 05:29:28 np0005540825 systemd-logind[789]: Session 58 logged out. Waiting for processes to exit.
Dec  1 05:29:28 np0005540825 systemd-logind[789]: Removed session 58.
Dec  1 05:29:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:28.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:28.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:29:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:28.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:29:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:28.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:30.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:30.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:31] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:29:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:31] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:29:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:29:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:29:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:32.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:29:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:32.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:33 np0005540825 nova_compute[256151]: 2025-12-01 10:29:33.011 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:33 np0005540825 nova_compute[256151]: 2025-12-01 10:29:33.284 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:33.747Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:34.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:34.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:29:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:36.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:36.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:37.365Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:29:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:29:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:29:38 np0005540825 nova_compute[256151]: 2025-12-01 10:29:38.011 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:29:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:29:38 np0005540825 nova_compute[256151]: 2025-12-01 10:29:38.285 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:38 np0005540825 podman[289205]: 2025-12-01 10:29:38.364097033 +0000 UTC m=+0.065228663 container create d0e88e98e166ea9e260e907315c1b1b6c805cbc435420278415fe1f839c4595b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_curie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  1 05:29:38 np0005540825 podman[289205]: 2025-12-01 10:29:38.329485584 +0000 UTC m=+0.030617304 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:29:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:38.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:38.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:29:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:38.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:29:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:38.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:29:39
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', '.nfs', 'vms', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control', 'backups']
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:29:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Dec  1 05:29:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:29:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:29:39 np0005540825 systemd[1]: Started libpod-conmon-d0e88e98e166ea9e260e907315c1b1b6c805cbc435420278415fe1f839c4595b.scope.
Dec  1 05:29:40 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:29:40 np0005540825 podman[289205]: 2025-12-01 10:29:40.02397027 +0000 UTC m=+1.725101970 container init d0e88e98e166ea9e260e907315c1b1b6c805cbc435420278415fe1f839c4595b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_curie, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:29:40 np0005540825 podman[289205]: 2025-12-01 10:29:40.035513347 +0000 UTC m=+1.736644977 container start d0e88e98e166ea9e260e907315c1b1b6c805cbc435420278415fe1f839c4595b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:29:40 np0005540825 podman[289205]: 2025-12-01 10:29:40.039232505 +0000 UTC m=+1.740364235 container attach d0e88e98e166ea9e260e907315c1b1b6c805cbc435420278415fe1f839c4595b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_curie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Dec  1 05:29:40 np0005540825 frosty_curie[289234]: 167 167
Dec  1 05:29:40 np0005540825 systemd[1]: libpod-d0e88e98e166ea9e260e907315c1b1b6c805cbc435420278415fe1f839c4595b.scope: Deactivated successfully.
Dec  1 05:29:40 np0005540825 podman[289205]: 2025-12-01 10:29:40.042821921 +0000 UTC m=+1.743953541 container died d0e88e98e166ea9e260e907315c1b1b6c805cbc435420278415fe1f839c4595b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:29:40 np0005540825 podman[289219]: 2025-12-01 10:29:40.055917738 +0000 UTC m=+1.644410568 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:29:40 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9b5e90ec74f464087398e7a14030c3ee6e2d8400266ead3d12e0667128fb4105-merged.mount: Deactivated successfully.
Dec  1 05:29:40 np0005540825 podman[289205]: 2025-12-01 10:29:40.137411022 +0000 UTC m=+1.838542652 container remove d0e88e98e166ea9e260e907315c1b1b6c805cbc435420278415fe1f839c4595b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_curie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:29:40 np0005540825 systemd[1]: libpod-conmon-d0e88e98e166ea9e260e907315c1b1b6c805cbc435420278415fe1f839c4595b.scope: Deactivated successfully.
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:29:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:29:40 np0005540825 podman[289265]: 2025-12-01 10:29:40.306176972 +0000 UTC m=+0.041302937 container create 6ffc7cebed30066ada737a1588fad0bd0fbdca2faa8e5b8ddadfeefa73e43e03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brown, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:29:40 np0005540825 systemd[1]: Started libpod-conmon-6ffc7cebed30066ada737a1588fad0bd0fbdca2faa8e5b8ddadfeefa73e43e03.scope.
Dec  1 05:29:40 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:29:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756433e13fd593928e681f0a47610a54ad5c0c29b83601cfea5ab837dc8f344/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:40 np0005540825 podman[289265]: 2025-12-01 10:29:40.28915705 +0000 UTC m=+0.024283045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:29:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756433e13fd593928e681f0a47610a54ad5c0c29b83601cfea5ab837dc8f344/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756433e13fd593928e681f0a47610a54ad5c0c29b83601cfea5ab837dc8f344/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756433e13fd593928e681f0a47610a54ad5c0c29b83601cfea5ab837dc8f344/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:40 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756433e13fd593928e681f0a47610a54ad5c0c29b83601cfea5ab837dc8f344/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:40 np0005540825 podman[289265]: 2025-12-01 10:29:40.40551572 +0000 UTC m=+0.140641705 container init 6ffc7cebed30066ada737a1588fad0bd0fbdca2faa8e5b8ddadfeefa73e43e03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:29:40 np0005540825 podman[289265]: 2025-12-01 10:29:40.415176256 +0000 UTC m=+0.150302221 container start 6ffc7cebed30066ada737a1588fad0bd0fbdca2faa8e5b8ddadfeefa73e43e03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brown, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  1 05:29:40 np0005540825 podman[289265]: 2025-12-01 10:29:40.418166846 +0000 UTC m=+0.153292811 container attach 6ffc7cebed30066ada737a1588fad0bd0fbdca2faa8e5b8ddadfeefa73e43e03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 05:29:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:40 np0005540825 lucid_brown[289282]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:29:40 np0005540825 lucid_brown[289282]: --> All data devices are unavailable
Dec  1 05:29:40 np0005540825 systemd[1]: libpod-6ffc7cebed30066ada737a1588fad0bd0fbdca2faa8e5b8ddadfeefa73e43e03.scope: Deactivated successfully.
Dec  1 05:29:40 np0005540825 podman[289297]: 2025-12-01 10:29:40.830209445 +0000 UTC m=+0.030210673 container died 6ffc7cebed30066ada737a1588fad0bd0fbdca2faa8e5b8ddadfeefa73e43e03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brown, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:29:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:40.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:40 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9756433e13fd593928e681f0a47610a54ad5c0c29b83601cfea5ab837dc8f344-merged.mount: Deactivated successfully.
Dec  1 05:29:40 np0005540825 podman[289297]: 2025-12-01 10:29:40.873948226 +0000 UTC m=+0.073949444 container remove 6ffc7cebed30066ada737a1588fad0bd0fbdca2faa8e5b8ddadfeefa73e43e03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brown, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Dec  1 05:29:40 np0005540825 systemd[1]: libpod-conmon-6ffc7cebed30066ada737a1588fad0bd0fbdca2faa8e5b8ddadfeefa73e43e03.scope: Deactivated successfully.
Dec  1 05:29:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:29:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:40.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:29:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:41] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Dec  1 05:29:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:41] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Dec  1 05:29:41 np0005540825 podman[289404]: 2025-12-01 10:29:41.500817419 +0000 UTC m=+0.050513792 container create 67e063476f2c28f739fc267b8c71acad2910f783461aa5196ca3bac7803e4efa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curran, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 05:29:41 np0005540825 systemd[1]: Started libpod-conmon-67e063476f2c28f739fc267b8c71acad2910f783461aa5196ca3bac7803e4efa.scope.
Dec  1 05:29:41 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:29:41 np0005540825 podman[289404]: 2025-12-01 10:29:41.47752248 +0000 UTC m=+0.027218943 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:29:41 np0005540825 podman[289404]: 2025-12-01 10:29:41.576767165 +0000 UTC m=+0.126463558 container init 67e063476f2c28f739fc267b8c71acad2910f783461aa5196ca3bac7803e4efa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curran, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 05:29:41 np0005540825 podman[289404]: 2025-12-01 10:29:41.584025568 +0000 UTC m=+0.133721951 container start 67e063476f2c28f739fc267b8c71acad2910f783461aa5196ca3bac7803e4efa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curran, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:29:41 np0005540825 podman[289404]: 2025-12-01 10:29:41.588369773 +0000 UTC m=+0.138066176 container attach 67e063476f2c28f739fc267b8c71acad2910f783461aa5196ca3bac7803e4efa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 05:29:41 np0005540825 vigilant_curran[289420]: 167 167
Dec  1 05:29:41 np0005540825 systemd[1]: libpod-67e063476f2c28f739fc267b8c71acad2910f783461aa5196ca3bac7803e4efa.scope: Deactivated successfully.
Dec  1 05:29:41 np0005540825 podman[289404]: 2025-12-01 10:29:41.591487796 +0000 UTC m=+0.141184209 container died 67e063476f2c28f739fc267b8c71acad2910f783461aa5196ca3bac7803e4efa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  1 05:29:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b67c89c461d77b45df20be939e9b93db32a51d15fc16ce7ec67d6c025398a7ba-merged.mount: Deactivated successfully.
Dec  1 05:29:41 np0005540825 podman[289404]: 2025-12-01 10:29:41.632562886 +0000 UTC m=+0.182259259 container remove 67e063476f2c28f739fc267b8c71acad2910f783461aa5196ca3bac7803e4efa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:29:41 np0005540825 systemd[1]: libpod-conmon-67e063476f2c28f739fc267b8c71acad2910f783461aa5196ca3bac7803e4efa.scope: Deactivated successfully.
Dec  1 05:29:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 819 B/s rd, 0 op/s
Dec  1 05:29:41 np0005540825 podman[289446]: 2025-12-01 10:29:41.810739536 +0000 UTC m=+0.047995455 container create 84a354785c33d26db34c80cf287e427c4d76d3834775ddb9671d08ff942540c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec  1 05:29:41 np0005540825 systemd[1]: Started libpod-conmon-84a354785c33d26db34c80cf287e427c4d76d3834775ddb9671d08ff942540c7.scope.
Dec  1 05:29:41 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:29:41 np0005540825 podman[289446]: 2025-12-01 10:29:41.792391939 +0000 UTC m=+0.029647898 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:29:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e61e12ecb03f0435ff3a1e944273a87d45c45fcff64802783f8095a0a7a067/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e61e12ecb03f0435ff3a1e944273a87d45c45fcff64802783f8095a0a7a067/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e61e12ecb03f0435ff3a1e944273a87d45c45fcff64802783f8095a0a7a067/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e61e12ecb03f0435ff3a1e944273a87d45c45fcff64802783f8095a0a7a067/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:41 np0005540825 podman[289446]: 2025-12-01 10:29:41.905576234 +0000 UTC m=+0.142832183 container init 84a354785c33d26db34c80cf287e427c4d76d3834775ddb9671d08ff942540c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_keller, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:29:41 np0005540825 podman[289446]: 2025-12-01 10:29:41.913530395 +0000 UTC m=+0.150786324 container start 84a354785c33d26db34c80cf287e427c4d76d3834775ddb9671d08ff942540c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_keller, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:29:41 np0005540825 podman[289446]: 2025-12-01 10:29:41.917293525 +0000 UTC m=+0.154549464 container attach 84a354785c33d26db34c80cf287e427c4d76d3834775ddb9671d08ff942540c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_keller, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:29:42 np0005540825 charming_keller[289462]: {
Dec  1 05:29:42 np0005540825 charming_keller[289462]:    "1": [
Dec  1 05:29:42 np0005540825 charming_keller[289462]:        {
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            "devices": [
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "/dev/loop3"
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            ],
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            "lv_name": "ceph_lv0",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            "lv_size": "21470642176",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            "name": "ceph_lv0",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            "tags": {
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.cluster_name": "ceph",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.crush_device_class": "",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.encrypted": "0",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.osd_id": "1",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.type": "block",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.vdo": "0",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:                "ceph.with_tpm": "0"
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            },
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            "type": "block",
Dec  1 05:29:42 np0005540825 charming_keller[289462]:            "vg_name": "ceph_vg0"
Dec  1 05:29:42 np0005540825 charming_keller[289462]:        }
Dec  1 05:29:42 np0005540825 charming_keller[289462]:    ]
Dec  1 05:29:42 np0005540825 charming_keller[289462]: }
Dec  1 05:29:42 np0005540825 systemd[1]: libpod-84a354785c33d26db34c80cf287e427c4d76d3834775ddb9671d08ff942540c7.scope: Deactivated successfully.
Dec  1 05:29:42 np0005540825 podman[289446]: 2025-12-01 10:29:42.283555989 +0000 UTC m=+0.520811918 container died 84a354785c33d26db34c80cf287e427c4d76d3834775ddb9671d08ff942540c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec  1 05:29:42 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b9e61e12ecb03f0435ff3a1e944273a87d45c45fcff64802783f8095a0a7a067-merged.mount: Deactivated successfully.
Dec  1 05:29:42 np0005540825 podman[289446]: 2025-12-01 10:29:42.333162636 +0000 UTC m=+0.570418595 container remove 84a354785c33d26db34c80cf287e427c4d76d3834775ddb9671d08ff942540c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 05:29:42 np0005540825 systemd[1]: libpod-conmon-84a354785c33d26db34c80cf287e427c4d76d3834775ddb9671d08ff942540c7.scope: Deactivated successfully.
Dec  1 05:29:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:42.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:42 np0005540825 podman[289577]: 2025-12-01 10:29:42.934280445 +0000 UTC m=+0.041712279 container create b1ebf3202e2f19554f8ab99f5e8a66b720ece12db457194df9a3f89d175480e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:29:42 np0005540825 systemd[1]: Started libpod-conmon-b1ebf3202e2f19554f8ab99f5e8a66b720ece12db457194df9a3f89d175480e8.scope.
Dec  1 05:29:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:42.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:42 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:29:43 np0005540825 podman[289577]: 2025-12-01 10:29:42.917759556 +0000 UTC m=+0.025191420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:29:43 np0005540825 podman[289577]: 2025-12-01 10:29:43.013661712 +0000 UTC m=+0.121093556 container init b1ebf3202e2f19554f8ab99f5e8a66b720ece12db457194df9a3f89d175480e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_archimedes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:29:43 np0005540825 nova_compute[256151]: 2025-12-01 10:29:43.014 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:43 np0005540825 podman[289577]: 2025-12-01 10:29:43.02112215 +0000 UTC m=+0.128553984 container start b1ebf3202e2f19554f8ab99f5e8a66b720ece12db457194df9a3f89d175480e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_archimedes, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:29:43 np0005540825 podman[289577]: 2025-12-01 10:29:43.024720546 +0000 UTC m=+0.132152410 container attach b1ebf3202e2f19554f8ab99f5e8a66b720ece12db457194df9a3f89d175480e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  1 05:29:43 np0005540825 crazy_archimedes[289595]: 167 167
Dec  1 05:29:43 np0005540825 systemd[1]: libpod-b1ebf3202e2f19554f8ab99f5e8a66b720ece12db457194df9a3f89d175480e8.scope: Deactivated successfully.
Dec  1 05:29:43 np0005540825 podman[289577]: 2025-12-01 10:29:43.026494673 +0000 UTC m=+0.133926507 container died b1ebf3202e2f19554f8ab99f5e8a66b720ece12db457194df9a3f89d175480e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_archimedes, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:29:43 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4e258579fed6d7d04addf54ea48f863b9078432209ed1cd94160e3730017796d-merged.mount: Deactivated successfully.
Dec  1 05:29:43 np0005540825 podman[289577]: 2025-12-01 10:29:43.063655249 +0000 UTC m=+0.171087083 container remove b1ebf3202e2f19554f8ab99f5e8a66b720ece12db457194df9a3f89d175480e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:29:43 np0005540825 podman[289592]: 2025-12-01 10:29:43.070740877 +0000 UTC m=+0.089063425 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Dec  1 05:29:43 np0005540825 systemd[1]: libpod-conmon-b1ebf3202e2f19554f8ab99f5e8a66b720ece12db457194df9a3f89d175480e8.scope: Deactivated successfully.
Dec  1 05:29:43 np0005540825 podman[289638]: 2025-12-01 10:29:43.218888321 +0000 UTC m=+0.047763520 container create 7c27ad4b9bd331d13849919330fd6d0a1dc5b31c06ef0f110be5081db95bbea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lederberg, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Dec  1 05:29:43 np0005540825 systemd[1]: Started libpod-conmon-7c27ad4b9bd331d13849919330fd6d0a1dc5b31c06ef0f110be5081db95bbea0.scope.
Dec  1 05:29:43 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:29:43 np0005540825 nova_compute[256151]: 2025-12-01 10:29:43.286 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5eb0f861333db794269f982ec5434cd0e46a6661d8bfd0b05ae3898bad51c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5eb0f861333db794269f982ec5434cd0e46a6661d8bfd0b05ae3898bad51c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5eb0f861333db794269f982ec5434cd0e46a6661d8bfd0b05ae3898bad51c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5eb0f861333db794269f982ec5434cd0e46a6661d8bfd0b05ae3898bad51c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:29:43 np0005540825 podman[289638]: 2025-12-01 10:29:43.199710011 +0000 UTC m=+0.028585240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:29:43 np0005540825 podman[289638]: 2025-12-01 10:29:43.312802684 +0000 UTC m=+0.141677893 container init 7c27ad4b9bd331d13849919330fd6d0a1dc5b31c06ef0f110be5081db95bbea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lederberg, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:29:43 np0005540825 podman[289638]: 2025-12-01 10:29:43.327268438 +0000 UTC m=+0.156143677 container start 7c27ad4b9bd331d13849919330fd6d0a1dc5b31c06ef0f110be5081db95bbea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  1 05:29:43 np0005540825 podman[289638]: 2025-12-01 10:29:43.331146851 +0000 UTC m=+0.160022070 container attach 7c27ad4b9bd331d13849919330fd6d0a1dc5b31c06ef0f110be5081db95bbea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  1 05:29:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Dec  1 05:29:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:43.748Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:43 np0005540825 lvm[289731]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:29:43 np0005540825 lvm[289731]: VG ceph_vg0 finished
Dec  1 05:29:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:44 np0005540825 blissful_lederberg[289655]: {}
Dec  1 05:29:44 np0005540825 podman[289638]: 2025-12-01 10:29:44.076453708 +0000 UTC m=+0.905328907 container died 7c27ad4b9bd331d13849919330fd6d0a1dc5b31c06ef0f110be5081db95bbea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  1 05:29:44 np0005540825 systemd[1]: libpod-7c27ad4b9bd331d13849919330fd6d0a1dc5b31c06ef0f110be5081db95bbea0.scope: Deactivated successfully.
Dec  1 05:29:44 np0005540825 systemd[1]: libpod-7c27ad4b9bd331d13849919330fd6d0a1dc5b31c06ef0f110be5081db95bbea0.scope: Consumed 1.213s CPU time.
Dec  1 05:29:44 np0005540825 systemd[1]: var-lib-containers-storage-overlay-db5eb0f861333db794269f982ec5434cd0e46a6661d8bfd0b05ae3898bad51c2-merged.mount: Deactivated successfully.
Dec  1 05:29:44 np0005540825 podman[289638]: 2025-12-01 10:29:44.121403031 +0000 UTC m=+0.950278230 container remove 7c27ad4b9bd331d13849919330fd6d0a1dc5b31c06ef0f110be5081db95bbea0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_lederberg, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:29:44 np0005540825 systemd[1]: libpod-conmon-7c27ad4b9bd331d13849919330fd6d0a1dc5b31c06ef0f110be5081db95bbea0.scope: Deactivated successfully.
Dec  1 05:29:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:29:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:29:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:44.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:44.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:29:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 819 B/s rd, 0 op/s
Dec  1 05:29:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:46.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:29:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:46.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:29:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:47.367Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:29:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:47.367Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:29:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:47.367Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.448129) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584987448174, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2718, "num_deletes": 506, "total_data_size": 4489788, "memory_usage": 4562672, "flush_reason": "Manual Compaction"}
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584987469367, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 4347942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31753, "largest_seqno": 34470, "table_properties": {"data_size": 4335363, "index_size": 7537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3973, "raw_key_size": 33338, "raw_average_key_size": 21, "raw_value_size": 4306800, "raw_average_value_size": 2737, "num_data_blocks": 321, "num_entries": 1573, "num_filter_entries": 1573, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764584788, "oldest_key_time": 1764584788, "file_creation_time": 1764584987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 21507 microseconds, and 8342 cpu microseconds.
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.469632) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 4347942 bytes OK
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.469722) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.471270) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.471292) EVENT_LOG_v1 {"time_micros": 1764584987471284, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.471338) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4476748, prev total WAL file size 4476748, number of live WAL files 2.
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.473664) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(4246KB)], [68(13MB)]
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584987473699, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 18969832, "oldest_snapshot_seqno": -1}
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6812 keys, 16744464 bytes, temperature: kUnknown
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584987560974, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 16744464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16697092, "index_size": 29212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 175852, "raw_average_key_size": 25, "raw_value_size": 16572905, "raw_average_value_size": 2432, "num_data_blocks": 1172, "num_entries": 6812, "num_filter_entries": 6812, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764584987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.561433) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 16744464 bytes
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.562682) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.1 rd, 191.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.1, 13.9 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(8.2) write-amplify(3.9) OK, records in: 7843, records dropped: 1031 output_compression: NoCompression
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.562699) EVENT_LOG_v1 {"time_micros": 1764584987562691, "job": 38, "event": "compaction_finished", "compaction_time_micros": 87375, "compaction_time_cpu_micros": 40610, "output_level": 6, "num_output_files": 1, "total_output_size": 16744464, "num_input_records": 7843, "num_output_records": 6812, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584987563559, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764584987566143, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.473564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.566220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.566225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.566227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.566228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:29:47 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:29:47.566230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:29:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Dec  1 05:29:48 np0005540825 nova_compute[256151]: 2025-12-01 10:29:48.015 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:48 np0005540825 nova_compute[256151]: 2025-12-01 10:29:48.287 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:29:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:48.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:48.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:50 np0005540825 nova_compute[256151]: 2025-12-01 10:29:50.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:29:50 np0005540825 nova_compute[256151]: 2025-12-01 10:29:50.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  1 05:29:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:29:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:50.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:29:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:50.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:51] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Dec  1 05:29:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:29:51] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Dec  1 05:29:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:29:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:52.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:52.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:53 np0005540825 nova_compute[256151]: 2025-12-01 10:29:53.017 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:29:53 np0005540825 nova_compute[256151]: 2025-12-01 10:29:53.288 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:29:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:53.749Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:29:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:29:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:54.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:55.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:29:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:29:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:56.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:29:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:57.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:29:57 np0005540825 podman[289809]: 2025-12-01 10:29:57.053130269 +0000 UTC m=+0.168103724 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
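
The podman event above is the periodic healthcheck for ovn_controller running the configured test ('/openstack/healthcheck') and reporting healthy with no failing streak. The same check can be driven by hand; a sketch assuming podman is on PATH and the container name matches the log:

    import subprocess

    # Exit status 0 means the container's configured healthcheck passed.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")
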
Dec  1 05:29:57 np0005540825 nova_compute[256151]: 2025-12-01 10:29:57.054 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:29:57 np0005540825 nova_compute[256151]: 2025-12-01 10:29:57.054 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 05:29:57 np0005540825 nova_compute[256151]: 2025-12-01 10:29:57.054 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 05:29:57 np0005540825 nova_compute[256151]: 2025-12-01 10:29:57.071 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 05:29:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:57.368Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:29:58 np0005540825 nova_compute[256151]: 2025-12-01 10:29:58.056 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:29:58 np0005540825 nova_compute[256151]: 2025-12-01 10:29:58.290 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:29:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:29:58.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:29:58.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:29:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:29:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:29:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:29:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:29:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:29:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:29:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:29:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:29:59.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:29:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 2 failed cephadm daemon(s)
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] :      osd.2 observed slow operation indications in BlueStore
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.osfnzc on compute-1 is in error state
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.1.0.compute-2.ymqwfj on compute-2 is in error state
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 2 failed cephadm daemon(s)
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Dec  1 05:30:00 np0005540825 ceph-mon[74416]:     osd.2 observed slow operation indications in BlueStore
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Dec  1 05:30:00 np0005540825 ceph-mon[74416]:    daemon nfs.cephfs.0.0.compute-1.osfnzc on compute-1 is in error state
Dec  1 05:30:00 np0005540825 ceph-mon[74416]:    daemon nfs.cephfs.1.0.compute-2.ymqwfj on compute-2 is in error state
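
This periodic health report pins down why the cluster sits in HEALTH_WARN: slow BlueStore operations on osd.2, plus the two failed NFS daemons on compute-1 and compute-2, which is consistent with the local Ganesha instance looping in grace (clid count 0) and the dashboard webhooks timing out. A sketch that fetches the same checks programmatically, assuming the ceph CLI and an admin-capable keyring are available on this host:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "health", "detail", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(raw)
    print(health["status"])  # e.g. HEALTH_WARN
    for name, check in health.get("checks", {}).items():
        print(name, "-", check["summary"]["message"])
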
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.733053) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585000733199, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 365, "num_deletes": 251, "total_data_size": 269083, "memory_usage": 276520, "flush_reason": "Manual Compaction"}
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585000737983, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 247709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34472, "largest_seqno": 34835, "table_properties": {"data_size": 245482, "index_size": 391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6170, "raw_average_key_size": 20, "raw_value_size": 240987, "raw_average_value_size": 797, "num_data_blocks": 17, "num_entries": 302, "num_filter_entries": 302, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764584987, "oldest_key_time": 1764584987, "file_creation_time": 1764585000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 5074 microseconds, and 1992 cpu microseconds.
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.738128) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 247709 bytes OK
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.738205) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.740123) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.740144) EVENT_LOG_v1 {"time_micros": 1764585000740137, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.740180) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 266673, prev total WAL file size 266673, number of live WAL files 2.
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.741321) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323536' seq:0, type:0; will stop at (end)
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(241KB)], [71(15MB)]
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585000741385, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 16992173, "oldest_snapshot_seqno": -1}
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6604 keys, 12891270 bytes, temperature: kUnknown
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585000808153, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 12891270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12850115, "index_size": 23571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 171758, "raw_average_key_size": 26, "raw_value_size": 12734326, "raw_average_value_size": 1928, "num_data_blocks": 937, "num_entries": 6604, "num_filter_entries": 6604, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764585000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.808609) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 12891270 bytes
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.810758) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 254.0 rd, 192.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 16.0 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(120.6) write-amplify(52.0) OK, records in: 7114, records dropped: 510 output_compression: NoCompression
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.810814) EVENT_LOG_v1 {"time_micros": 1764585000810791, "job": 40, "event": "compaction_finished", "compaction_time_micros": 66897, "compaction_time_cpu_micros": 29259, "output_level": 6, "num_output_files": 1, "total_output_size": 12891270, "num_input_records": 7114, "num_output_records": 6604, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585000811194, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585000816695, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.741187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.816831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.816838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.816842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.816844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:30:00 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:00.816847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
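
The flush/compaction pair above (jobs 39 and 40) is a routine manual compaction of the mon store: a 247709-byte L0 table is merged with the 16 MB L6 file into a single 12891270-byte L6 file. The amplification figures in the summary follow directly from those byte counts; reading them as output over L0 input and (input + output) over L0 input reproduces the logged values (the formulas are my inference from the numbers, not stated in the log):

    l0_in    = 247_709      # L0 table #73 from the flush
    total_in = 16_992_173   # input_data_size in the compaction_started event
    out      = 12_891_270   # total_output_size in the compaction_finished event

    print(round(out / l0_in, 1))               # 52.0  == write-amplify(52.0)
    print(round((total_in + out) / l0_in, 1))  # 120.6 == read-write-amplify(120.6)
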
Dec  1 05:30:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:00.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:01.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:01 np0005540825 nova_compute[256151]: 2025-12-01 10:30:01.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:30:01 np0005540825 nova_compute[256151]: 2025-12-01 10:30:01.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:30:01 np0005540825 nova_compute[256151]: 2025-12-01 10:30:01.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:30:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:01] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:30:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:01] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:30:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:30:02 np0005540825 nova_compute[256151]: 2025-12-01 10:30:02.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:30:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:02.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:03 np0005540825 nova_compute[256151]: 2025-12-01 10:30:03.058 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:30:03 np0005540825 nova_compute[256151]: 2025-12-01 10:30:03.292 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:30:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:03.750Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:30:04 np0005540825 nova_compute[256151]: 2025-12-01 10:30:04.022 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:30:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:30:04.588 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:30:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:30:04.588 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:30:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:30:04.588 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:30:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:04.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:30:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:06.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:07 np0005540825 nova_compute[256151]: 2025-12-01 10:30:07.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:30:07 np0005540825 nova_compute[256151]: 2025-12-01 10:30:07.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 05:30:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:07.369Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:30:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:07.369Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:30:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:07.370Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:30:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.060 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.064 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.065 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.065 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.065 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.066 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.295 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:30:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:30:08 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263249999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.508 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
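
As part of update_available_resource, nova shells out to ceph df to size the RBD-backed storage, and the audit above shows that call completing in about 0.44s. A sketch of the same call and the cluster-wide field it exposes; exactly which per-pool fields nova reads is version-dependent, so treat the key names as assumptions beyond "stats" being present in the JSON output:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    print(stats["total_avail_bytes"] / 2**30, "GiB available cluster-wide")
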
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.683 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.685 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4469MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.685 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:30:08 np0005540825 nova_compute[256151]: 2025-12-01 10:30:08.686 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:30:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:08.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:08.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:30:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:09.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:09 np0005540825 nova_compute[256151]: 2025-12-01 10:30:09.158 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 05:30:09 np0005540825 nova_compute[256151]: 2025-12-01 10:30:09.158 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 05:30:09 np0005540825 nova_compute[256151]: 2025-12-01 10:30:09.194 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:30:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:30:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:30:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:30:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315727704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:30:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:30:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:30:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:30:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:30:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:30:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:30:09 np0005540825 nova_compute[256151]: 2025-12-01 10:30:09.648 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:30:09 np0005540825 nova_compute[256151]: 2025-12-01 10:30:09.653 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:30:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:10 np0005540825 nova_compute[256151]: 2025-12-01 10:30:10.129 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
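
The inventory nova reports here determines what placement will schedule: capacity per resource class is (total - reserved) * allocation_ratio, so the figures above translate into 32 schedulable VCPUs, 7168 MB of RAM, and 52.2 GB of disk. A quick check of that arithmetic, with the values copied from the log line:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
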
Dec  1 05:30:10 np0005540825 nova_compute[256151]: 2025-12-01 10:30:10.132 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 05:30:10 np0005540825 nova_compute[256151]: 2025-12-01 10:30:10.132 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:30:10 np0005540825 podman[289918]: 2025-12-01 10:30:10.194142095 +0000 UTC m=+0.062993813 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 05:30:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:10.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:11] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Dec  1 05:30:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:11] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Dec  1 05:30:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:30:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:12.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:13 np0005540825 nova_compute[256151]: 2025-12-01 10:30:13.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:30:13 np0005540825 nova_compute[256151]: 2025-12-01 10:30:13.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:30:13 np0005540825 nova_compute[256151]: 2025-12-01 10:30:13.062 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:13 np0005540825 podman[289940]: 2025-12-01 10:30:13.217737106 +0000 UTC m=+0.073776630 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 05:30:13 np0005540825 nova_compute[256151]: 2025-12-01 10:30:13.297 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:13.751Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:30:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:13.752Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
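The alertmanager dispatcher repeatedly fails to deliver ceph-dashboard webhook notifications to compute-1 and compute-2 on port 8443; dial timeouts and context-deadline errors like these are consistent with nothing listening on that port, or with it being filtered. A throwaway receiver, sketched purely as a reachability probe (the path /api/prometheus_receiver is taken from the log; everything else is an assumption, not the dashboard's real handler):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            n = int(self.headers.get("Content-Length", 0))
            print(self.path, self.rfile.read(n)[:200])
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        # Run on compute-1/compute-2: if alertmanager's POSTs land here, the
        # network path is fine and the dashboard receiver is the problem.
        HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()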
Dec  1 05:30:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
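This four-line ganesha grace cycle repeats every ~5 s throughout the section, and rados_cluster_grace_enforcing always returns ret=-45. If that is a negated errno (an assumption worth checking against the nfs-ganesha sources), the platform mapping can be looked up directly:

    import errno, os
    # On Linux, errno 45 resolves to EL2NSYNC.
    print(errno.errorcode.get(45), "-", os.strerror(45))
    # EL2NSYNC - Level 2 not synchronized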
Dec  1 05:30:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:14.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:30:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:16 np0005540825 nova_compute[256151]: 2025-12-01 10:30:16.112 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:30:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:16.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:17.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:17.371Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:30:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:18 np0005540825 nova_compute[256151]: 2025-12-01 10:30:18.101 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:18 np0005540825 nova_compute[256151]: 2025-12-01 10:30:18.299 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:18.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:18.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:30:19 np0005540825 nova_compute[256151]: 2025-12-01 10:30:19.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:30:19 np0005540825 nova_compute[256151]: 2025-12-01 10:30:19.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  1 05:30:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:30:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:19.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:30:19 np0005540825 nova_compute[256151]: 2025-12-01 10:30:19.156 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
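The "Running periodic task ..." DEBUG lines come from oslo.service iterating ComputeManager's registered tasks; the three lines above show one such task (_run_pending_deletes) running to completion. The registration pattern, as a minimal sketch of the oslo.service API rather than nova's actual manager:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_something(self, context):
            # Dispatched by run_periodic_tasks(), which also emits the
            # "Running periodic task ..." debug lines seen above.
            pass

    # Manager().run_periodic_tasks(context=None)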
Dec  1 05:30:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:20.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:21.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:21] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Dec  1 05:30:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:21] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Dec  1 05:30:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:30:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:22.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:23.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:23 np0005540825 nova_compute[256151]: 2025-12-01 10:30:23.103 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:23 np0005540825 nova_compute[256151]: 2025-12-01 10:30:23.300 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:23.754Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:30:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:30:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
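The audit trail shows mgr.compute-0 polling "osd blocklist ls" on a schedule. The same query can be reproduced from a shell with the ceph CLI; the JSON output shape varies by release, so treat this as a sketch (it assumes an admin keyring is available on the node):

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"])
    print(json.loads(out))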
Dec  1 05:30:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:24.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:25.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:30:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:26.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:27.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:27 np0005540825 podman[289976]: 2025-12-01 10:30:27.27135467 +0000 UTC m=+0.129370246 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:30:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:27.372Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:28 np0005540825 nova_compute[256151]: 2025-12-01 10:30:28.106 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:28 np0005540825 nova_compute[256151]: 2025-12-01 10:30:28.330 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:28.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:30:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:29.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:30.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:30:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:31.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:30:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:31] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:30:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:31] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:30:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:30:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:32.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:33.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:33 np0005540825 nova_compute[256151]: 2025-12-01 10:30:33.107 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:33 np0005540825 nova_compute[256151]: 2025-12-01 10:30:33.331 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:33.755Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:30:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:34.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:35.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:30:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:36.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:37.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:37.373Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:38 np0005540825 nova_compute[256151]: 2025-12-01 10:30:38.108 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:38 np0005540825 nova_compute[256151]: 2025-12-01 10:30:38.332 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:38.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:38.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:30:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:39.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:30:39
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.nfs', '.rgw.root', 'default.rgw.log', '.mgr', 'images', 'vms', 'cephfs.cephfs.data']
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:30:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:30:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:30:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
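Each pg_autoscaler line above is reproducible from its own numbers: pg target = (fraction of space used) x bias x a constant that works out to 300 here, consistent with 100 target PGs per OSD across 3 OSDs (an inference from the arithmetic, not from the cluster config), after which the result is quantized and small changes leave the current pg_num alone. Worked check:

    # Reproduce the "pg target" values from the pg_autoscaler lines above.
    FACTOR = 300  # inferred: e.g. mon_target_pg_per_osd=100 x 3 OSDs
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.000665858301588852, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (used, bias) in pools.items():
        print(f"{name:20s} pg target = {used * bias * FACTOR:.6g}")
    # .mgr 0.00215572, images 0.199757, cephfs.cephfs.meta 0.000610471
    # -- matching the logged targets before quantization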
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:30:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:30:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:40.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:41.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:41 np0005540825 podman[290042]: 2025-12-01 10:30:41.198228427 +0000 UTC m=+0.060970140 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 05:30:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:41] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:30:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:41] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:30:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:30:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:42.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:43.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:43 np0005540825 nova_compute[256151]: 2025-12-01 10:30:43.111 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:43 np0005540825 nova_compute[256151]: 2025-12-01 10:30:43.380 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:30:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:43.755Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:30:44 np0005540825 podman[290068]: 2025-12-01 10:30:44.200022999 +0000 UTC m=+0.057956750 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec  1 05:30:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:44.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:45.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:30:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 531 B/s rd, 0 op/s
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:45.855954) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585045856051, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 628, "num_deletes": 250, "total_data_size": 801994, "memory_usage": 814008, "flush_reason": "Manual Compaction"}
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585045910211, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 786181, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34836, "largest_seqno": 35463, "table_properties": {"data_size": 782939, "index_size": 1150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 6668, "raw_average_key_size": 16, "raw_value_size": 776411, "raw_average_value_size": 1950, "num_data_blocks": 51, "num_entries": 398, "num_filter_entries": 398, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764585001, "oldest_key_time": 1764585001, "file_creation_time": 1764585045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 54273 microseconds, and 3252 cpu microseconds.
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:30:45 np0005540825 podman[290265]: 2025-12-01 10:30:45.823902962 +0000 UTC m=+0.021621395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:30:45 np0005540825 podman[290265]: 2025-12-01 10:30:45.936918162 +0000 UTC m=+0.134636565 container create e38905f4c98a537d042d2c17d325700681e489ca1841491e670f46d77d0ae15d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_bell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:45.910265) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 786181 bytes OK
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:45.910287) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:45.940824) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:45.940862) EVENT_LOG_v1 {"time_micros": 1764585045940853, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:45.940884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 798704, prev total WAL file size 798704, number of live WAL files 2.
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:45.941451) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(767KB)], [74(12MB)]
Dec  1 05:30:45 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585045941497, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 13677451, "oldest_snapshot_seqno": -1}
Dec  1 05:30:45 np0005540825 systemd[1]: Started libpod-conmon-e38905f4c98a537d042d2c17d325700681e489ca1841491e670f46d77d0ae15d.scope.
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6491 keys, 12313943 bytes, temperature: kUnknown
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585046000701, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12313943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12273642, "index_size": 22975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 171085, "raw_average_key_size": 26, "raw_value_size": 12159661, "raw_average_value_size": 1873, "num_data_blocks": 899, "num_entries": 6491, "num_filter_entries": 6491, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764585045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:46.000958) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12313943 bytes
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:46.002435) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 230.7 rd, 207.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.3 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(33.1) write-amplify(15.7) OK, records in: 7002, records dropped: 511 output_compression: NoCompression
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:46.002452) EVENT_LOG_v1 {"time_micros": 1764585046002444, "job": 42, "event": "compaction_finished", "compaction_time_micros": 59277, "compaction_time_cpu_micros": 27160, "output_level": 6, "num_output_files": 1, "total_output_size": 12313943, "num_input_records": 7002, "num_output_records": 6491, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585046002692, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585046004710, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:45.941381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:46.004801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:46.004807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:46.004809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:46.004811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:30:46.004813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:30:46 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:30:46 np0005540825 podman[290265]: 2025-12-01 10:30:46.021344544 +0000 UTC m=+0.219062967 container init e38905f4c98a537d042d2c17d325700681e489ca1841491e670f46d77d0ae15d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_bell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  1 05:30:46 np0005540825 podman[290265]: 2025-12-01 10:30:46.030371063 +0000 UTC m=+0.228089466 container start e38905f4c98a537d042d2c17d325700681e489ca1841491e670f46d77d0ae15d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:30:46 np0005540825 podman[290265]: 2025-12-01 10:30:46.035434408 +0000 UTC m=+0.233152841 container attach e38905f4c98a537d042d2c17d325700681e489ca1841491e670f46d77d0ae15d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_bell, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:30:46 np0005540825 optimistic_bell[290281]: 167 167
Dec  1 05:30:46 np0005540825 systemd[1]: libpod-e38905f4c98a537d042d2c17d325700681e489ca1841491e670f46d77d0ae15d.scope: Deactivated successfully.
Dec  1 05:30:46 np0005540825 podman[290265]: 2025-12-01 10:30:46.038102969 +0000 UTC m=+0.235821382 container died e38905f4c98a537d042d2c17d325700681e489ca1841491e670f46d77d0ae15d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 05:30:46 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d6fa3746ab30f1e58e0b061476235add3d066bd7a48efdd50eacf4490d82a7d4-merged.mount: Deactivated successfully.
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:30:46 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:30:46 np0005540825 podman[290265]: 2025-12-01 10:30:46.074675259 +0000 UTC m=+0.272393662 container remove e38905f4c98a537d042d2c17d325700681e489ca1841491e670f46d77d0ae15d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_bell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  1 05:30:46 np0005540825 systemd[1]: libpod-conmon-e38905f4c98a537d042d2c17d325700681e489ca1841491e670f46d77d0ae15d.scope: Deactivated successfully.
Dec  1 05:30:46 np0005540825 podman[290305]: 2025-12-01 10:30:46.294649038 +0000 UTC m=+0.108818779 container create c7c2f344cbceae9d5778cbb95a575b70e83facdad87d86cc910f572e8b88e83d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ramanujan, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:30:46 np0005540825 podman[290305]: 2025-12-01 10:30:46.208698577 +0000 UTC m=+0.022868318 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:30:46 np0005540825 systemd[1]: Started libpod-conmon-c7c2f344cbceae9d5778cbb95a575b70e83facdad87d86cc910f572e8b88e83d.scope.
Dec  1 05:30:46 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:30:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ddca6d8528b265eb6f29c71a7b7543cce67b74a09925714f4f6b94fa4707b17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ddca6d8528b265eb6f29c71a7b7543cce67b74a09925714f4f6b94fa4707b17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ddca6d8528b265eb6f29c71a7b7543cce67b74a09925714f4f6b94fa4707b17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ddca6d8528b265eb6f29c71a7b7543cce67b74a09925714f4f6b94fa4707b17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:46 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ddca6d8528b265eb6f29c71a7b7543cce67b74a09925714f4f6b94fa4707b17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:46 np0005540825 podman[290305]: 2025-12-01 10:30:46.407565846 +0000 UTC m=+0.221735577 container init c7c2f344cbceae9d5778cbb95a575b70e83facdad87d86cc910f572e8b88e83d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ramanujan, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:30:46 np0005540825 podman[290305]: 2025-12-01 10:30:46.415870897 +0000 UTC m=+0.230040618 container start c7c2f344cbceae9d5778cbb95a575b70e83facdad87d86cc910f572e8b88e83d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ramanujan, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  1 05:30:46 np0005540825 podman[290305]: 2025-12-01 10:30:46.419684228 +0000 UTC m=+0.233853969 container attach c7c2f344cbceae9d5778cbb95a575b70e83facdad87d86cc910f572e8b88e83d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:30:46 np0005540825 nervous_ramanujan[290321]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:30:46 np0005540825 nervous_ramanujan[290321]: --> All data devices are unavailable
Dec  1 05:30:46 np0005540825 systemd[1]: libpod-c7c2f344cbceae9d5778cbb95a575b70e83facdad87d86cc910f572e8b88e83d.scope: Deactivated successfully.
Dec  1 05:30:46 np0005540825 podman[290305]: 2025-12-01 10:30:46.795545797 +0000 UTC m=+0.609715538 container died c7c2f344cbceae9d5778cbb95a575b70e83facdad87d86cc910f572e8b88e83d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  1 05:30:46 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2ddca6d8528b265eb6f29c71a7b7543cce67b74a09925714f4f6b94fa4707b17-merged.mount: Deactivated successfully.
Dec  1 05:30:46 np0005540825 podman[290305]: 2025-12-01 10:30:46.841386184 +0000 UTC m=+0.655555895 container remove c7c2f344cbceae9d5778cbb95a575b70e83facdad87d86cc910f572e8b88e83d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 05:30:46 np0005540825 systemd[1]: libpod-conmon-c7c2f344cbceae9d5778cbb95a575b70e83facdad87d86cc910f572e8b88e83d.scope: Deactivated successfully.
Dec  1 05:30:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:30:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:46.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:30:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:47.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 531 B/s rd, 0 op/s
Dec  1 05:30:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:47.374Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:30:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:47.377Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:30:47 np0005540825 podman[290440]: 2025-12-01 10:30:47.504788276 +0000 UTC m=+0.049831264 container create ae57fdad8187c47e8ed3310b88641e9f47b96218a3676bb5ab50248c31d0ce2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mendeleev, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  1 05:30:47 np0005540825 systemd[1]: Started libpod-conmon-ae57fdad8187c47e8ed3310b88641e9f47b96218a3676bb5ab50248c31d0ce2e.scope.
Dec  1 05:30:47 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:30:47 np0005540825 podman[290440]: 2025-12-01 10:30:47.484427566 +0000 UTC m=+0.029470574 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:30:47 np0005540825 podman[290440]: 2025-12-01 10:30:47.683267264 +0000 UTC m=+0.228310272 container init ae57fdad8187c47e8ed3310b88641e9f47b96218a3676bb5ab50248c31d0ce2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec  1 05:30:47 np0005540825 podman[290440]: 2025-12-01 10:30:47.690152207 +0000 UTC m=+0.235195195 container start ae57fdad8187c47e8ed3310b88641e9f47b96218a3676bb5ab50248c31d0ce2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:30:47 np0005540825 gallant_mendeleev[290456]: 167 167
Dec  1 05:30:47 np0005540825 systemd[1]: libpod-ae57fdad8187c47e8ed3310b88641e9f47b96218a3676bb5ab50248c31d0ce2e.scope: Deactivated successfully.
Dec  1 05:30:47 np0005540825 podman[290440]: 2025-12-01 10:30:47.792626328 +0000 UTC m=+0.337669346 container attach ae57fdad8187c47e8ed3310b88641e9f47b96218a3676bb5ab50248c31d0ce2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mendeleev, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 05:30:47 np0005540825 podman[290440]: 2025-12-01 10:30:47.793126761 +0000 UTC m=+0.338169809 container died ae57fdad8187c47e8ed3310b88641e9f47b96218a3676bb5ab50248c31d0ce2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 05:30:47 np0005540825 systemd[1]: var-lib-containers-storage-overlay-06585b6c7096eccd98d1779d122605ae0cab1d2ef6c87eb7ce4fcf00ca7e1b1f-merged.mount: Deactivated successfully.
Dec  1 05:30:47 np0005540825 podman[290440]: 2025-12-01 10:30:47.838807344 +0000 UTC m=+0.383850342 container remove ae57fdad8187c47e8ed3310b88641e9f47b96218a3676bb5ab50248c31d0ce2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:30:47 np0005540825 systemd[1]: libpod-conmon-ae57fdad8187c47e8ed3310b88641e9f47b96218a3676bb5ab50248c31d0ce2e.scope: Deactivated successfully.
Dec  1 05:30:48 np0005540825 podman[290484]: 2025-12-01 10:30:48.012514436 +0000 UTC m=+0.046323061 container create 20fdff150fc5c6e0a08cdd6b6590180c95c3b1674e86d2488f022679fd7001c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:30:48 np0005540825 systemd[1]: Started libpod-conmon-20fdff150fc5c6e0a08cdd6b6590180c95c3b1674e86d2488f022679fd7001c9.scope.
Dec  1 05:30:48 np0005540825 podman[290484]: 2025-12-01 10:30:47.990138422 +0000 UTC m=+0.023947017 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:30:48 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:30:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e897ce18e726a4a1b363ba7094a3d20c98ba80daf4bfde7cf1cdd1c6906e3da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e897ce18e726a4a1b363ba7094a3d20c98ba80daf4bfde7cf1cdd1c6906e3da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e897ce18e726a4a1b363ba7094a3d20c98ba80daf4bfde7cf1cdd1c6906e3da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:48 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e897ce18e726a4a1b363ba7094a3d20c98ba80daf4bfde7cf1cdd1c6906e3da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:48 np0005540825 podman[290484]: 2025-12-01 10:30:48.111610217 +0000 UTC m=+0.145418802 container init 20fdff150fc5c6e0a08cdd6b6590180c95c3b1674e86d2488f022679fd7001c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  1 05:30:48 np0005540825 nova_compute[256151]: 2025-12-01 10:30:48.174 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:48 np0005540825 podman[290484]: 2025-12-01 10:30:48.178505593 +0000 UTC m=+0.212314188 container start 20fdff150fc5c6e0a08cdd6b6590180c95c3b1674e86d2488f022679fd7001c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:30:48 np0005540825 podman[290484]: 2025-12-01 10:30:48.182572721 +0000 UTC m=+0.216381326 container attach 20fdff150fc5c6e0a08cdd6b6590180c95c3b1674e86d2488f022679fd7001c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:30:48 np0005540825 nova_compute[256151]: 2025-12-01 10:30:48.381 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]: {
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:    "1": [
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:        {
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            "devices": [
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "/dev/loop3"
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            ],
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            "lv_name": "ceph_lv0",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            "lv_size": "21470642176",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            "name": "ceph_lv0",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            "tags": {
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.cluster_name": "ceph",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.crush_device_class": "",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.encrypted": "0",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.osd_id": "1",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.type": "block",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.vdo": "0",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:                "ceph.with_tpm": "0"
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            },
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            "type": "block",
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:            "vg_name": "ceph_vg0"
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:        }
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]:    ]
Dec  1 05:30:48 np0005540825 wonderful_swartz[290501]: }
Dec  1 05:30:48 np0005540825 systemd[1]: libpod-20fdff150fc5c6e0a08cdd6b6590180c95c3b1674e86d2488f022679fd7001c9.scope: Deactivated successfully.
Dec  1 05:30:48 np0005540825 podman[290484]: 2025-12-01 10:30:48.54905135 +0000 UTC m=+0.582859945 container died 20fdff150fc5c6e0a08cdd6b6590180c95c3b1674e86d2488f022679fd7001c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  1 05:30:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:48.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:30:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:49.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:49 np0005540825 systemd[1]: var-lib-containers-storage-overlay-1e897ce18e726a4a1b363ba7094a3d20c98ba80daf4bfde7cf1cdd1c6906e3da-merged.mount: Deactivated successfully.
Dec  1 05:30:49 np0005540825 podman[290484]: 2025-12-01 10:30:49.173818157 +0000 UTC m=+1.207626782 container remove 20fdff150fc5c6e0a08cdd6b6590180c95c3b1674e86d2488f022679fd7001c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:30:49 np0005540825 systemd[1]: libpod-conmon-20fdff150fc5c6e0a08cdd6b6590180c95c3b1674e86d2488f022679fd7001c9.scope: Deactivated successfully.
Dec  1 05:30:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 531 B/s rd, 0 op/s
Dec  1 05:30:49 np0005540825 podman[290641]: 2025-12-01 10:30:49.810385506 +0000 UTC m=+0.047446441 container create 6e6767a16cb7b1e046658027a2464f9aed1dcc31cb9053ad1a9b9d7ad6581620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec  1 05:30:49 np0005540825 systemd[1]: Started libpod-conmon-6e6767a16cb7b1e046658027a2464f9aed1dcc31cb9053ad1a9b9d7ad6581620.scope.
Dec  1 05:30:49 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:30:49 np0005540825 podman[290641]: 2025-12-01 10:30:49.788842434 +0000 UTC m=+0.025903409 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:30:49 np0005540825 podman[290641]: 2025-12-01 10:30:49.901899086 +0000 UTC m=+0.138960041 container init 6e6767a16cb7b1e046658027a2464f9aed1dcc31cb9053ad1a9b9d7ad6581620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_heyrovsky, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:30:49 np0005540825 podman[290641]: 2025-12-01 10:30:49.910648818 +0000 UTC m=+0.147709753 container start 6e6767a16cb7b1e046658027a2464f9aed1dcc31cb9053ad1a9b9d7ad6581620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  1 05:30:49 np0005540825 podman[290641]: 2025-12-01 10:30:49.914465859 +0000 UTC m=+0.151526834 container attach 6e6767a16cb7b1e046658027a2464f9aed1dcc31cb9053ad1a9b9d7ad6581620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 05:30:49 np0005540825 clever_heyrovsky[290658]: 167 167
Dec  1 05:30:49 np0005540825 systemd[1]: libpod-6e6767a16cb7b1e046658027a2464f9aed1dcc31cb9053ad1a9b9d7ad6581620.scope: Deactivated successfully.
Dec  1 05:30:49 np0005540825 podman[290641]: 2025-12-01 10:30:49.918928258 +0000 UTC m=+0.155989193 container died 6e6767a16cb7b1e046658027a2464f9aed1dcc31cb9053ad1a9b9d7ad6581620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec  1 05:30:49 np0005540825 systemd[1]: var-lib-containers-storage-overlay-81d03f6f2419681584cf2a9616ea21c3ad8e06dba2525d364301d863ca087f3b-merged.mount: Deactivated successfully.
Dec  1 05:30:49 np0005540825 podman[290641]: 2025-12-01 10:30:49.960236524 +0000 UTC m=+0.197297449 container remove 6e6767a16cb7b1e046658027a2464f9aed1dcc31cb9053ad1a9b9d7ad6581620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:30:49 np0005540825 systemd[1]: libpod-conmon-6e6767a16cb7b1e046658027a2464f9aed1dcc31cb9053ad1a9b9d7ad6581620.scope: Deactivated successfully.
Dec  1 05:30:50 np0005540825 podman[290683]: 2025-12-01 10:30:50.126575901 +0000 UTC m=+0.027893972 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:30:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:50.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:50 np0005540825 podman[290683]: 2025-12-01 10:30:50.990936909 +0000 UTC m=+0.892254960 container create 61d84e4754a4bd91668256fcf1e588d902361806e17a64a1060acb4d82bab977 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:30:51 np0005540825 systemd[1]: Started libpod-conmon-61d84e4754a4bd91668256fcf1e588d902361806e17a64a1060acb4d82bab977.scope.
Dec  1 05:30:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:51.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:51 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:30:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdaff86340dfac51a4bb7bac4b41a4db04fbebd7d7d7594d65385aba32f9fcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdaff86340dfac51a4bb7bac4b41a4db04fbebd7d7d7594d65385aba32f9fcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdaff86340dfac51a4bb7bac4b41a4db04fbebd7d7d7594d65385aba32f9fcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:51 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdaff86340dfac51a4bb7bac4b41a4db04fbebd7d7d7594d65385aba32f9fcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:30:51 np0005540825 podman[290683]: 2025-12-01 10:30:51.154054199 +0000 UTC m=+1.055372260 container init 61d84e4754a4bd91668256fcf1e588d902361806e17a64a1060acb4d82bab977 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_darwin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  1 05:30:51 np0005540825 podman[290683]: 2025-12-01 10:30:51.169551741 +0000 UTC m=+1.070869802 container start 61d84e4754a4bd91668256fcf1e588d902361806e17a64a1060acb4d82bab977 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_darwin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 05:30:51 np0005540825 podman[290683]: 2025-12-01 10:30:51.174168353 +0000 UTC m=+1.075486414 container attach 61d84e4754a4bd91668256fcf1e588d902361806e17a64a1060acb4d82bab977 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_darwin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:30:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 797 B/s rd, 0 op/s
Dec  1 05:30:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:51] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:30:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:30:51] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:30:51 np0005540825 lvm[290776]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:30:51 np0005540825 lvm[290776]: VG ceph_vg0 finished
Dec  1 05:30:51 np0005540825 affectionate_darwin[290699]: {}
Dec  1 05:30:51 np0005540825 systemd[1]: libpod-61d84e4754a4bd91668256fcf1e588d902361806e17a64a1060acb4d82bab977.scope: Deactivated successfully.
Dec  1 05:30:51 np0005540825 systemd[1]: libpod-61d84e4754a4bd91668256fcf1e588d902361806e17a64a1060acb4d82bab977.scope: Consumed 1.303s CPU time.
Dec  1 05:30:51 np0005540825 podman[290683]: 2025-12-01 10:30:51.987500477 +0000 UTC m=+1.888818528 container died 61d84e4754a4bd91668256fcf1e588d902361806e17a64a1060acb4d82bab977 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_darwin, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 05:30:52 np0005540825 systemd[1]: var-lib-containers-storage-overlay-4fdaff86340dfac51a4bb7bac4b41a4db04fbebd7d7d7594d65385aba32f9fcb-merged.mount: Deactivated successfully.
Dec  1 05:30:52 np0005540825 podman[290683]: 2025-12-01 10:30:52.57599899 +0000 UTC m=+2.477317041 container remove 61d84e4754a4bd91668256fcf1e588d902361806e17a64a1060acb4d82bab977 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:30:52 np0005540825 systemd[1]: libpod-conmon-61d84e4754a4bd91668256fcf1e588d902361806e17a64a1060acb4d82bab977.scope: Deactivated successfully.
Dec  1 05:30:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:30:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:30:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:30:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:30:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:30:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:52.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:30:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:53.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:53 np0005540825 nova_compute[256151]: 2025-12-01 10:30:53.177 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:30:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 531 B/s rd, 0 op/s
Dec  1 05:30:53 np0005540825 nova_compute[256151]: 2025-12-01 10:30:53.383 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:30:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:53.757Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:53 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:30:53 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:30:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:30:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:30:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:30:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:54.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:55.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 531 B/s rd, 0 op/s
Dec  1 05:30:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:30:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:56.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:57.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:57 np0005540825 nova_compute[256151]: 2025-12-01 10:30:57.157 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:30:57 np0005540825 nova_compute[256151]: 2025-12-01 10:30:57.157 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 05:30:57 np0005540825 nova_compute[256151]: 2025-12-01 10:30:57.157 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 05:30:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:30:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:57.377Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:30:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:57.377Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:57 np0005540825 nova_compute[256151]: 2025-12-01 10:30:57.453 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 05:30:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec  1 05:30:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Dec  1 05:30:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec  1 05:30:57 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Dec  1 05:30:58 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Dec  1 05:30:58 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Dec  1 05:30:58 np0005540825 nova_compute[256151]: 2025-12-01 10:30:58.178 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:30:58 np0005540825 podman[290827]: 2025-12-01 10:30:58.237037261 +0000 UTC m=+0.099508892 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:30:58 np0005540825 nova_compute[256151]: 2025-12-01 10:30:58.339 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:30:58 np0005540825 nova_compute[256151]: 2025-12-01 10:30:58.385 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:30:58 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec  1 05:30:58 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Dec  1 05:30:58 np0005540825 radosgw[94538]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Dec  1 05:30:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:30:58.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:30:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:30:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:30:58.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:30:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:30:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:30:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:30:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:30:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:30:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:30:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:30:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:30:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:30:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:31:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:00.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:31:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:01.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:31:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s
Dec  1 05:31:01 np0005540825 nova_compute[256151]: 2025-12-01 10:31:01.303 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:31:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:01] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:31:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:01] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:31:02 np0005540825 nova_compute[256151]: 2025-12-01 10:31:02.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:31:02 np0005540825 nova_compute[256151]: 2025-12-01 10:31:02.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:31:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:02.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:31:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:03.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:31:03 np0005540825 nova_compute[256151]: 2025-12-01 10:31:03.181 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:31:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s
Dec  1 05:31:03 np0005540825 nova_compute[256151]: 2025-12-01 10:31:03.386 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:31:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:03.758Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:04 np0005540825 nova_compute[256151]: 2025-12-01 10:31:04.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:31:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:31:04.589 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:31:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:31:04.590 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:31:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:31:04.590 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:31:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:04.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:05.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s
Dec  1 05:31:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:31:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:06.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:31:07 np0005540825 nova_compute[256151]: 2025-12-01 10:31:07.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:31:07 np0005540825 nova_compute[256151]: 2025-12-01 10:31:07.026 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 05:31:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:31:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:07.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:31:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 0 B/s wr, 155 op/s
Dec  1 05:31:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:07.379Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:08 np0005540825 nova_compute[256151]: 2025-12-01 10:31:08.184 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:31:08 np0005540825 nova_compute[256151]: 2025-12-01 10:31:08.388 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:31:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:08.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:31:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:08.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:31:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:09 np0005540825 nova_compute[256151]: 2025-12-01 10:31:09.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:31:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:09.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 0 B/s wr, 154 op/s
Dec  1 05:31:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:31:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:31:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:31:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:31:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:31:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:31:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:31:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:31:10 np0005540825 nova_compute[256151]: 2025-12-01 10:31:10.518 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:31:10 np0005540825 nova_compute[256151]: 2025-12-01 10:31:10.519 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:31:10 np0005540825 nova_compute[256151]: 2025-12-01 10:31:10.519 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:31:10 np0005540825 nova_compute[256151]: 2025-12-01 10:31:10.519 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 05:31:10 np0005540825 nova_compute[256151]: 2025-12-01 10:31:10.520 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:31:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:10.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:31:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853699093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.000 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:31:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:31:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:11.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.212 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.213 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4463MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.213 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.213 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:31:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 0 B/s wr, 155 op/s
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.356 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.356 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 05:31:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:11] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:31:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:11] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.374 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing inventories for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.462 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating ProviderTree inventory for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.463 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating inventory in ProviderTree for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.477 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing aggregate associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.792 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing trait associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SVM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 05:31:11 np0005540825 nova_compute[256151]: 2025-12-01 10:31:11.813 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:31:12 np0005540825 podman[290935]: 2025-12-01 10:31:12.226741545 +0000 UTC m=+0.080865548 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 05:31:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:31:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3108768016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:31:12 np0005540825 nova_compute[256151]: 2025-12-01 10:31:12.286 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:31:12 np0005540825 nova_compute[256151]: 2025-12-01 10:31:12.294 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:31:12 np0005540825 nova_compute[256151]: 2025-12-01 10:31:12.323 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 05:31:12 np0005540825 nova_compute[256151]: 2025-12-01 10:31:12.325 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 05:31:12 np0005540825 nova_compute[256151]: 2025-12-01 10:31:12.326 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:31:12 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:12 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:12 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:12.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:31:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:13.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:31:13 np0005540825 nova_compute[256151]: 2025-12-01 10:31:13.186 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:31:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Dec  1 05:31:13 np0005540825 nova_compute[256151]: 2025-12-01 10:31:13.389 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:31:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:13.758Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:14 np0005540825 nova_compute[256151]: 2025-12-01 10:31:14.326 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:31:14 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:14 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:14 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:14.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:15 np0005540825 nova_compute[256151]: 2025-12-01 10:31:15.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:31:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:31:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:15.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:31:15 np0005540825 podman[290959]: 2025-12-01 10:31:15.211672781 +0000 UTC m=+0.079231734 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:31:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Dec  1 05:31:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:16 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:16 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:16 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:16.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:17.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Dec  1 05:31:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:17.380Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:31:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:17.380Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:31:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:17.380Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:31:18 np0005540825 nova_compute[256151]: 2025-12-01 10:31:18.187 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:18 np0005540825 nova_compute[256151]: 2025-12-01 10:31:18.391 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:18.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:18 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:18 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:18 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:18.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:19.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:31:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:20 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:20 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:20 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:20.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:21.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:31:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:21] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:31:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:21] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:31:22 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:22 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:22 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:22.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:23.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:23 np0005540825 nova_compute[256151]: 2025-12-01 10:31:23.188 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:31:23 np0005540825 nova_compute[256151]: 2025-12-01 10:31:23.393 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:23.760Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:31:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:31:24 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:24 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:24 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:24.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:31:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:25.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:31:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 734 B/s rd, 0 op/s
Dec  1 05:31:26 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:26 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:26 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:26.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:27.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:27.381Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 734 B/s rd, 0 op/s
Dec  1 05:31:28 np0005540825 nova_compute[256151]: 2025-12-01 10:31:28.190 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:28 np0005540825 nova_compute[256151]: 2025-12-01 10:31:28.394 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:28.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:28 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:28 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:28 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:29.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:29 np0005540825 podman[290994]: 2025-12-01 10:31:29.232585539 +0000 UTC m=+0.092962406 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 05:31:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 489 B/s rd, 0 op/s
Dec  1 05:31:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:30 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:30 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:30 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:30.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:31:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:31.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:31:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:31] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:31:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:31] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:31:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 734 B/s rd, 0 op/s
Dec  1 05:31:32 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:32 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:31:32 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:32.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:31:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:33.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:33 np0005540825 nova_compute[256151]: 2025-12-01 10:31:33.192 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:33 np0005540825 nova_compute[256151]: 2025-12-01 10:31:33.397 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:33.761Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 489 B/s rd, 0 op/s
Dec  1 05:31:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:34 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:34 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:34 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:34.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:35.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 734 B/s rd, 0 op/s
Dec  1 05:31:36 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:36 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:36 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:36.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:31:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:37.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:31:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:37.383Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:31:38 np0005540825 nova_compute[256151]: 2025-12-01 10:31:38.195 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:38 np0005540825 nova_compute[256151]: 2025-12-01 10:31:38.398 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:38.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:38 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:38 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:38 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:38.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:31:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:39.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:31:39
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['volumes', 'vms', '.nfs', '.mgr', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta']
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:31:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:31:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:31:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:31:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:31:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:40 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:40 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:31:40 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:40.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:31:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:41.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:41] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:31:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:41] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:31:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:31:42 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:42 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:42 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:42.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:31:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:43.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:31:43 np0005540825 podman[291059]: 2025-12-01 10:31:43.189388861 +0000 UTC m=+0.051372639 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:31:43 np0005540825 nova_compute[256151]: 2025-12-01 10:31:43.198 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:43 np0005540825 nova_compute[256151]: 2025-12-01 10:31:43.400 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:43.762Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:31:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:44 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:44 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:44 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:44.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:45.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:31:46 np0005540825 podman[291082]: 2025-12-01 10:31:46.192506776 +0000 UTC m=+0.057239735 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  1 05:31:46 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:46 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:46 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:46.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:31:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:47.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:31:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:47.385Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:31:48 np0005540825 nova_compute[256151]: 2025-12-01 10:31:48.200 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:48 np0005540825 nova_compute[256151]: 2025-12-01 10:31:48.401 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:48.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:48 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:48 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:48 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:48.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:31:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:31:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:31:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:50 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:50 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:31:50 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:50.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:31:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:51.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:51] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:31:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:31:51] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:31:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:31:52 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:52 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:31:52 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:52.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:31:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:53.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:53 np0005540825 nova_compute[256151]: 2025-12-01 10:31:53.201 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:53 np0005540825 nova_compute[256151]: 2025-12-01 10:31:53.402 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:53.763Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:31:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:53.763Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:31:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:31:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
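
This burst of handle_command/audit pairs is the cephadm mgr module (mgr.compute-0.fospow) refreshing cluster state: minimal conf, admin and bootstrap-osd keys, the destroyed-OSD tree, and config-key updates. The same mon commands can be issued from Python via the rados binding; a sketch assuming a local /etc/ceph/ceph.conf and admin keyring:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "osd tree",
                          "states": ["destroyed"], "format": "json"})
        ret, out, errs = cluster.mon_command(cmd, b"")
        if ret == 0:
            print(json.loads(out))  # the payload the mon dispatched above
    finally:
        cluster.shutdown()
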
Dec  1 05:31:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:31:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:31:54 np0005540825 podman[291309]: 2025-12-01 10:31:54.564470018 +0000 UTC m=+0.087180352 container create eb83e3a9cbed238bda81e334a4fcfb083a49242c1d1b309124a8e620e97f0b8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_snyder, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  1 05:31:54 np0005540825 podman[291309]: 2025-12-01 10:31:54.503946447 +0000 UTC m=+0.026656801 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:31:54 np0005540825 systemd[1]: Started libpod-conmon-eb83e3a9cbed238bda81e334a4fcfb083a49242c1d1b309124a8e620e97f0b8d.scope.
Dec  1 05:31:54 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:31:54 np0005540825 podman[291309]: 2025-12-01 10:31:54.703030288 +0000 UTC m=+0.225740692 container init eb83e3a9cbed238bda81e334a4fcfb083a49242c1d1b309124a8e620e97f0b8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:31:54 np0005540825 podman[291309]: 2025-12-01 10:31:54.714567835 +0000 UTC m=+0.237278149 container start eb83e3a9cbed238bda81e334a4fcfb083a49242c1d1b309124a8e620e97f0b8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:31:54 np0005540825 podman[291309]: 2025-12-01 10:31:54.718685825 +0000 UTC m=+0.241396159 container attach eb83e3a9cbed238bda81e334a4fcfb083a49242c1d1b309124a8e620e97f0b8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:31:54 np0005540825 exciting_snyder[291325]: 167 167
Dec  1 05:31:54 np0005540825 systemd[1]: libpod-eb83e3a9cbed238bda81e334a4fcfb083a49242c1d1b309124a8e620e97f0b8d.scope: Deactivated successfully.
Dec  1 05:31:54 np0005540825 podman[291309]: 2025-12-01 10:31:54.722066065 +0000 UTC m=+0.244776409 container died eb83e3a9cbed238bda81e334a4fcfb083a49242c1d1b309124a8e620e97f0b8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:31:54 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9c1a60e4db6551a0ee0cc4cea8046b27079ded4444d81847c6afa05485f5ecd8-merged.mount: Deactivated successfully.
Dec  1 05:31:54 np0005540825 podman[291309]: 2025-12-01 10:31:54.773453134 +0000 UTC m=+0.296163458 container remove eb83e3a9cbed238bda81e334a4fcfb083a49242c1d1b309124a8e620e97f0b8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 05:31:54 np0005540825 systemd[1]: libpod-conmon-eb83e3a9cbed238bda81e334a4fcfb083a49242c1d1b309124a8e620e97f0b8d.scope: Deactivated successfully.
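
Container eb83e3a9cbed lives well under a second: create, init, start, attach, a single line of stdout ("167 167", the ceph uid and gid), died, remove. This is cephadm running a throwaway probe in the ceph image. An equivalent one-shot call, sketched with subprocess; the exact entrypoint cephadm used is not in the log, so the stat invocation below is illustrative:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # --rm reproduces the create/start/died/remove cycle seen above.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # e.g. "167 167"
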
Dec  1 05:31:54 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:31:54 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:31:54 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:31:54 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:31:54 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:54 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:54 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:54.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:54 np0005540825 podman[291349]: 2025-12-01 10:31:54.997544601 +0000 UTC m=+0.073693483 container create 9e7245e347e087d1102a7165fcd80d6d9b55b6aa46cca9ae92c692601018b2a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 05:31:55 np0005540825 systemd[1]: Started libpod-conmon-9e7245e347e087d1102a7165fcd80d6d9b55b6aa46cca9ae92c692601018b2a2.scope.
Dec  1 05:31:55 np0005540825 podman[291349]: 2025-12-01 10:31:54.970372427 +0000 UTC m=+0.046521379 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:31:55 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:31:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2648b20b353b15dc6c2f9ae98d85a2e231fb2bbaf773f8c5aa82626f217a8d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2648b20b353b15dc6c2f9ae98d85a2e231fb2bbaf773f8c5aa82626f217a8d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2648b20b353b15dc6c2f9ae98d85a2e231fb2bbaf773f8c5aa82626f217a8d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2648b20b353b15dc6c2f9ae98d85a2e231fb2bbaf773f8c5aa82626f217a8d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:55 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2648b20b353b15dc6c2f9ae98d85a2e231fb2bbaf773f8c5aa82626f217a8d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
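
These xfs notices mean the filesystems were created without the bigtime feature, so their inode timestamps saturate at 0x7fffffff seconds after the Unix epoch. The cap the kernel is warning about is easy to confirm:

    from datetime import datetime, timezone

    # 0x7fffffff is the limit printed in the kernel messages above.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
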
Dec  1 05:31:55 np0005540825 podman[291349]: 2025-12-01 10:31:55.120756481 +0000 UTC m=+0.196905353 container init 9e7245e347e087d1102a7165fcd80d6d9b55b6aa46cca9ae92c692601018b2a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:31:55 np0005540825 podman[291349]: 2025-12-01 10:31:55.137695172 +0000 UTC m=+0.213844084 container start 9e7245e347e087d1102a7165fcd80d6d9b55b6aa46cca9ae92c692601018b2a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:31:55 np0005540825 podman[291349]: 2025-12-01 10:31:55.142599933 +0000 UTC m=+0.218748835 container attach 9e7245e347e087d1102a7165fcd80d6d9b55b6aa46cca9ae92c692601018b2a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elion, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:31:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:55.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:55 np0005540825 funny_elion[291365]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:31:55 np0005540825 funny_elion[291365]: --> All data devices are unavailable
Dec  1 05:31:55 np0005540825 systemd[1]: libpod-9e7245e347e087d1102a7165fcd80d6d9b55b6aa46cca9ae92c692601018b2a2.scope: Deactivated successfully.
Dec  1 05:31:55 np0005540825 podman[291349]: 2025-12-01 10:31:55.521947184 +0000 UTC m=+0.598096066 container died 9e7245e347e087d1102a7165fcd80d6d9b55b6aa46cca9ae92c692601018b2a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elion, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:31:55 np0005540825 systemd[1]: var-lib-containers-storage-overlay-a2648b20b353b15dc6c2f9ae98d85a2e231fb2bbaf773f8c5aa82626f217a8d6-merged.mount: Deactivated successfully.
Dec  1 05:31:55 np0005540825 podman[291349]: 2025-12-01 10:31:55.581164681 +0000 UTC m=+0.657313563 container remove 9e7245e347e087d1102a7165fcd80d6d9b55b6aa46cca9ae92c692601018b2a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elion, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  1 05:31:55 np0005540825 systemd[1]: libpod-conmon-9e7245e347e087d1102a7165fcd80d6d9b55b6aa46cca9ae92c692601018b2a2.scope: Deactivated successfully.
Dec  1 05:31:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 770 B/s rd, 0 op/s
Dec  1 05:31:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:31:56 np0005540825 podman[291485]: 2025-12-01 10:31:56.280463731 +0000 UTC m=+0.057700187 container create 58d65a2f24cec65af829a5bb76cb3cc5abbab1c2736a485cb190afd4c6edfafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  1 05:31:56 np0005540825 systemd[1]: Started libpod-conmon-58d65a2f24cec65af829a5bb76cb3cc5abbab1c2736a485cb190afd4c6edfafd.scope.
Dec  1 05:31:56 np0005540825 podman[291485]: 2025-12-01 10:31:56.249400574 +0000 UTC m=+0.026637120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:31:56 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:31:56 np0005540825 podman[291485]: 2025-12-01 10:31:56.371012682 +0000 UTC m=+0.148249158 container init 58d65a2f24cec65af829a5bb76cb3cc5abbab1c2736a485cb190afd4c6edfafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bartik, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  1 05:31:56 np0005540825 podman[291485]: 2025-12-01 10:31:56.379273242 +0000 UTC m=+0.156509708 container start 58d65a2f24cec65af829a5bb76cb3cc5abbab1c2736a485cb190afd4c6edfafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:31:56 np0005540825 podman[291485]: 2025-12-01 10:31:56.382443867 +0000 UTC m=+0.159680343 container attach 58d65a2f24cec65af829a5bb76cb3cc5abbab1c2736a485cb190afd4c6edfafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bartik, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:31:56 np0005540825 magical_bartik[291502]: 167 167
Dec  1 05:31:56 np0005540825 systemd[1]: libpod-58d65a2f24cec65af829a5bb76cb3cc5abbab1c2736a485cb190afd4c6edfafd.scope: Deactivated successfully.
Dec  1 05:31:56 np0005540825 podman[291485]: 2025-12-01 10:31:56.385548079 +0000 UTC m=+0.162784525 container died 58d65a2f24cec65af829a5bb76cb3cc5abbab1c2736a485cb190afd4c6edfafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bartik, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:31:56 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e63d944561bc089e6af9ae7d9c913755152046b01b64e5fed45a3917a8580e53-merged.mount: Deactivated successfully.
Dec  1 05:31:56 np0005540825 podman[291485]: 2025-12-01 10:31:56.461106501 +0000 UTC m=+0.238342987 container remove 58d65a2f24cec65af829a5bb76cb3cc5abbab1c2736a485cb190afd4c6edfafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bartik, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:31:56 np0005540825 systemd[1]: libpod-conmon-58d65a2f24cec65af829a5bb76cb3cc5abbab1c2736a485cb190afd4c6edfafd.scope: Deactivated successfully.
Dec  1 05:31:56 np0005540825 podman[291526]: 2025-12-01 10:31:56.663590513 +0000 UTC m=+0.052163100 container create a67e9189cec81c541daff359b4f4868ad0ab609881c7a16cf50d8b9af517b973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:31:56 np0005540825 systemd[1]: Started libpod-conmon-a67e9189cec81c541daff359b4f4868ad0ab609881c7a16cf50d8b9af517b973.scope.
Dec  1 05:31:56 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:31:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81b9a1bf65eec4fb0fce426fde39ee16d6aabbb27ddbd8c424ed57148868622/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81b9a1bf65eec4fb0fce426fde39ee16d6aabbb27ddbd8c424ed57148868622/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81b9a1bf65eec4fb0fce426fde39ee16d6aabbb27ddbd8c424ed57148868622/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:56 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81b9a1bf65eec4fb0fce426fde39ee16d6aabbb27ddbd8c424ed57148868622/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:56 np0005540825 podman[291526]: 2025-12-01 10:31:56.645562073 +0000 UTC m=+0.034134680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:31:56 np0005540825 podman[291526]: 2025-12-01 10:31:56.754093563 +0000 UTC m=+0.142666160 container init a67e9189cec81c541daff359b4f4868ad0ab609881c7a16cf50d8b9af517b973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:31:56 np0005540825 podman[291526]: 2025-12-01 10:31:56.759656911 +0000 UTC m=+0.148229498 container start a67e9189cec81c541daff359b4f4868ad0ab609881c7a16cf50d8b9af517b973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hermann, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:31:56 np0005540825 podman[291526]: 2025-12-01 10:31:56.763602266 +0000 UTC m=+0.152174853 container attach a67e9189cec81c541daff359b4f4868ad0ab609881c7a16cf50d8b9af517b973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  1 05:31:56 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:56 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:31:56 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:56.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:31:57 np0005540825 nova_compute[256151]: 2025-12-01 10:31:57.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:31:57 np0005540825 nova_compute[256151]: 2025-12-01 10:31:57.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:31:57 np0005540825 nova_compute[256151]: 2025-12-01 10:31:57.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:31:57 np0005540825 brave_hermann[291542]: {
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:    "1": [
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:        {
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            "devices": [
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "/dev/loop3"
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            ],
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            "lv_name": "ceph_lv0",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            "lv_size": "21470642176",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            "name": "ceph_lv0",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            "tags": {
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.cluster_name": "ceph",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.crush_device_class": "",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.encrypted": "0",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.osd_id": "1",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.type": "block",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.vdo": "0",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:                "ceph.with_tpm": "0"
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            },
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            "type": "block",
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:            "vg_name": "ceph_vg0"
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:        }
Dec  1 05:31:57 np0005540825 brave_hermann[291542]:    ]
Dec  1 05:31:57 np0005540825 brave_hermann[291542]: }
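
The brave_hermann lines above are one JSON document, ceph-volume's LVM listing for osd.1 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3, split one row per journal line. A sketch that reassembles and parses it from a saved excerpt (the file name is hypothetical):

    import json
    import re

    pat = re.compile(r"brave_hermann\[\d+\]: (.*)$")
    with open("messages.log") as fh:  # hypothetical saved excerpt
        body = "\n".join(m.group(1) for line in fh if (m := pat.search(line)))
    lvs = json.loads(body)
    print(lvs["1"][0]["lv_path"])  # /dev/ceph_vg0/ceph_lv0
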
Dec  1 05:31:57 np0005540825 systemd[1]: libpod-a67e9189cec81c541daff359b4f4868ad0ab609881c7a16cf50d8b9af517b973.scope: Deactivated successfully.
Dec  1 05:31:57 np0005540825 podman[291526]: 2025-12-01 10:31:57.127942908 +0000 UTC m=+0.516515535 container died a67e9189cec81c541daff359b4f4868ad0ab609881c7a16cf50d8b9af517b973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hermann, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec  1 05:31:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:31:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:57.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:31:57 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c81b9a1bf65eec4fb0fce426fde39ee16d6aabbb27ddbd8c424ed57148868622-merged.mount: Deactivated successfully.
Dec  1 05:31:57 np0005540825 podman[291526]: 2025-12-01 10:31:57.266657471 +0000 UTC m=+0.655230088 container remove a67e9189cec81c541daff359b4f4868ad0ab609881c7a16cf50d8b9af517b973 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  1 05:31:57 np0005540825 systemd[1]: libpod-conmon-a67e9189cec81c541daff359b4f4868ad0ab609881c7a16cf50d8b9af517b973.scope: Deactivated successfully.
Dec  1 05:31:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:57.386Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:57 np0005540825 nova_compute[256151]: 2025-12-01 10:31:57.459 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
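
_heal_instance_info_cache is one of ComputeManager's oslo.service periodic tasks; with no instances on this host it rebuilds an empty list and returns. The decorator pattern behind these DEBUG entries, as a minimal sketch (the spacing value is an assumption, not nova's setting):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self, conf):
            super().__init__(conf)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # nova walks its instance list here; on this host it is empty.
            pass
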
Dec  1 05:31:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Dec  1 05:31:57 np0005540825 podman[291657]: 2025-12-01 10:31:57.88067038 +0000 UTC m=+0.046717625 container create 677d5f1bf53ba8bf3ec3d1e676e4261c2f54ca6d6b695c17c8b366e3e46b2a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_wright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:31:57 np0005540825 systemd[1]: Started libpod-conmon-677d5f1bf53ba8bf3ec3d1e676e4261c2f54ca6d6b695c17c8b366e3e46b2a63.scope.
Dec  1 05:31:57 np0005540825 podman[291657]: 2025-12-01 10:31:57.856086716 +0000 UTC m=+0.022133971 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:31:57 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:31:57 np0005540825 podman[291657]: 2025-12-01 10:31:57.998788225 +0000 UTC m=+0.164835460 container init 677d5f1bf53ba8bf3ec3d1e676e4261c2f54ca6d6b695c17c8b366e3e46b2a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_wright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  1 05:31:58 np0005540825 podman[291657]: 2025-12-01 10:31:58.007881438 +0000 UTC m=+0.173928643 container start 677d5f1bf53ba8bf3ec3d1e676e4261c2f54ca6d6b695c17c8b366e3e46b2a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  1 05:31:58 np0005540825 loving_wright[291674]: 167 167
Dec  1 05:31:58 np0005540825 systemd[1]: libpod-677d5f1bf53ba8bf3ec3d1e676e4261c2f54ca6d6b695c17c8b366e3e46b2a63.scope: Deactivated successfully.
Dec  1 05:31:58 np0005540825 podman[291657]: 2025-12-01 10:31:58.013391944 +0000 UTC m=+0.179439149 container attach 677d5f1bf53ba8bf3ec3d1e676e4261c2f54ca6d6b695c17c8b366e3e46b2a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_wright, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:31:58 np0005540825 podman[291657]: 2025-12-01 10:31:58.013729313 +0000 UTC m=+0.179776518 container died 677d5f1bf53ba8bf3ec3d1e676e4261c2f54ca6d6b695c17c8b366e3e46b2a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  1 05:31:58 np0005540825 systemd[1]: var-lib-containers-storage-overlay-3db76911fdbd72a9c303b852d663d3a9142d79c1fd45cd5b6deee0e5ff7a1b55-merged.mount: Deactivated successfully.
Dec  1 05:31:58 np0005540825 podman[291657]: 2025-12-01 10:31:58.069869098 +0000 UTC m=+0.235916343 container remove 677d5f1bf53ba8bf3ec3d1e676e4261c2f54ca6d6b695c17c8b366e3e46b2a63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:31:58 np0005540825 systemd[1]: libpod-conmon-677d5f1bf53ba8bf3ec3d1e676e4261c2f54ca6d6b695c17c8b366e3e46b2a63.scope: Deactivated successfully.
Dec  1 05:31:58 np0005540825 nova_compute[256151]: 2025-12-01 10:31:58.204 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:58 np0005540825 podman[291699]: 2025-12-01 10:31:58.276981733 +0000 UTC m=+0.062516845 container create ea06915709960aad7663455d2532339db9290f10f8edda7ee6c857a684586049 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilbur, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:31:58 np0005540825 systemd[1]: Started libpod-conmon-ea06915709960aad7663455d2532339db9290f10f8edda7ee6c857a684586049.scope.
Dec  1 05:31:58 np0005540825 podman[291699]: 2025-12-01 10:31:58.245224708 +0000 UTC m=+0.030759650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:31:58 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:31:58 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba746c7c398f39cc0246671e952a584c31ae2a096e0c546aca0f8dcf46368c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:58 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba746c7c398f39cc0246671e952a584c31ae2a096e0c546aca0f8dcf46368c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:58 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba746c7c398f39cc0246671e952a584c31ae2a096e0c546aca0f8dcf46368c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:58 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba746c7c398f39cc0246671e952a584c31ae2a096e0c546aca0f8dcf46368c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:31:58 np0005540825 podman[291699]: 2025-12-01 10:31:58.377000446 +0000 UTC m=+0.162535458 container init ea06915709960aad7663455d2532339db9290f10f8edda7ee6c857a684586049 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilbur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  1 05:31:58 np0005540825 podman[291699]: 2025-12-01 10:31:58.388332458 +0000 UTC m=+0.173867370 container start ea06915709960aad7663455d2532339db9290f10f8edda7ee6c857a684586049 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 05:31:58 np0005540825 podman[291699]: 2025-12-01 10:31:58.392060597 +0000 UTC m=+0.177595619 container attach ea06915709960aad7663455d2532339db9290f10f8edda7ee6c857a684586049 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilbur, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 05:31:58 np0005540825 nova_compute[256151]: 2025-12-01 10:31:58.404 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:31:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:31:58.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:31:58 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:58 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:58 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:31:58.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:31:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:31:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:31:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:31:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:31:59 np0005540825 lvm[291789]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:31:59 np0005540825 lvm[291789]: VG ceph_vg0 finished
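
lvm's event-driven pvscan sees PV /dev/loop3 online and declares ceph_vg0 complete, which is what allows the OSD's logical volume to activate. The same completeness check, scripted against lvm's JSON reporting:

    import json
    import subprocess

    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count", "ceph_vg0"],
        capture_output=True, text=True, check=True).stdout
    vg = json.loads(out)["report"][0]["vg"][0]
    print(vg["vg_name"], vg["pv_count"])  # expect ceph_vg0 with 1 PV
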
Dec  1 05:31:59 np0005540825 sharp_wilbur[291715]: {}
Dec  1 05:31:59 np0005540825 systemd[1]: libpod-ea06915709960aad7663455d2532339db9290f10f8edda7ee6c857a684586049.scope: Deactivated successfully.
Dec  1 05:31:59 np0005540825 systemd[1]: libpod-ea06915709960aad7663455d2532339db9290f10f8edda7ee6c857a684586049.scope: Consumed 1.159s CPU time.
Dec  1 05:31:59 np0005540825 podman[291699]: 2025-12-01 10:31:59.130390777 +0000 UTC m=+0.915925779 container died ea06915709960aad7663455d2532339db9290f10f8edda7ee6c857a684586049 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilbur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:31:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:31:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:31:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:31:59.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:31:59 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9ba746c7c398f39cc0246671e952a584c31ae2a096e0c546aca0f8dcf46368c3-merged.mount: Deactivated successfully.
Dec  1 05:31:59 np0005540825 podman[291699]: 2025-12-01 10:31:59.199263321 +0000 UTC m=+0.984798223 container remove ea06915709960aad7663455d2532339db9290f10f8edda7ee6c857a684586049 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilbur, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 05:31:59 np0005540825 systemd[1]: libpod-conmon-ea06915709960aad7663455d2532339db9290f10f8edda7ee6c857a684586049.scope: Deactivated successfully.
Dec  1 05:31:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:31:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:31:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:31:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:31:59 np0005540825 podman[291830]: 2025-12-01 10:31:59.486291054 +0000 UTC m=+0.101539175 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  1 05:31:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Dec  1 05:32:00 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:32:00 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:32:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:00 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:00 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:00 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:00.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:01.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:01] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:32:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:01] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:32:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 770 B/s rd, 0 op/s
Dec  1 05:32:02 np0005540825 nova_compute[256151]: 2025-12-01 10:32:02.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:32:02 np0005540825 nova_compute[256151]: 2025-12-01 10:32:02.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:32:02 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:02 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:02 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:02.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:03.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:03 np0005540825 nova_compute[256151]: 2025-12-01 10:32:03.208 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:03 np0005540825 nova_compute[256151]: 2025-12-01 10:32:03.407 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:03.764Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Dec  1 05:32:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:04 np0005540825 nova_compute[256151]: 2025-12-01 10:32:04.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:32:04 np0005540825 nova_compute[256151]: 2025-12-01 10:32:04.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:32:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:32:04.590 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:32:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:32:04.591 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:32:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:32:04.591 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:32:04 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:04 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:04 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:04.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:05.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:32:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:06 np0005540825 nova_compute[256151]: 2025-12-01 10:32:06.022 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:32:06 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:06 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:06 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:06.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:07.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:07.387Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:32:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:07.387Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:32:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:08 np0005540825 nova_compute[256151]: 2025-12-01 10:32:08.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:32:08 np0005540825 nova_compute[256151]: 2025-12-01 10:32:08.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:32:08 np0005540825 nova_compute[256151]: 2025-12-01 10:32:08.207 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:08 np0005540825 nova_compute[256151]: 2025-12-01 10:32:08.409 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:08.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:08 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:08 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:08 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:08.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:32:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:09.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:32:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:32:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:32:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:32:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:32:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:32:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:32:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:32:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:32:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:10 np0005540825 nova_compute[256151]: 2025-12-01 10:32:10.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:32:10 np0005540825 nova_compute[256151]: 2025-12-01 10:32:10.058 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:32:10 np0005540825 nova_compute[256151]: 2025-12-01 10:32:10.058 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:32:10 np0005540825 nova_compute[256151]: 2025-12-01 10:32:10.058 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:32:10 np0005540825 nova_compute[256151]: 2025-12-01 10:32:10.058 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:32:10 np0005540825 nova_compute[256151]: 2025-12-01 10:32:10.059 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:32:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:32:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/528854548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:32:10 np0005540825 nova_compute[256151]: 2025-12-01 10:32:10.575 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:32:10 np0005540825 nova_compute[256151]: 2025-12-01 10:32:10.781 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:32:10 np0005540825 nova_compute[256151]: 2025-12-01 10:32:10.783 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4458MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:32:10 np0005540825 nova_compute[256151]: 2025-12-01 10:32:10.783 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:32:10 np0005540825 nova_compute[256151]: 2025-12-01 10:32:10.784 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:32:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:10 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:10 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:10 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:10.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:11 np0005540825 nova_compute[256151]: 2025-12-01 10:32:11.035 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:32:11 np0005540825 nova_compute[256151]: 2025-12-01 10:32:11.036 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:32:11 np0005540825 nova_compute[256151]: 2025-12-01 10:32:11.083 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:32:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:11.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:11] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:32:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:11] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:32:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:32:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989639442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:32:11 np0005540825 nova_compute[256151]: 2025-12-01 10:32:11.628 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:32:11 np0005540825 nova_compute[256151]: 2025-12-01 10:32:11.638 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:32:11 np0005540825 nova_compute[256151]: 2025-12-01 10:32:11.670 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:32:11 np0005540825 nova_compute[256151]: 2025-12-01 10:32:11.671 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:32:11 np0005540825 nova_compute[256151]: 2025-12-01 10:32:11.671 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:32:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:32:12 np0005540825 nova_compute[256151]: 2025-12-01 10:32:12.672 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:32:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:12.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:13.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:13 np0005540825 nova_compute[256151]: 2025-12-01 10:32:13.210 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:13 np0005540825 nova_compute[256151]: 2025-12-01 10:32:13.445 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:13.766Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:14 np0005540825 podman[291944]: 2025-12-01 10:32:14.221747869 +0000 UTC m=+0.085696583 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 05:32:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:15.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:15 np0005540825 nova_compute[256151]: 2025-12-01 10:32:15.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:32:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:15.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:32:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:17.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:17.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:17 np0005540825 podman[291965]: 2025-12-01 10:32:17.1996051 +0000 UTC m=+0.062836044 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:32:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:17.389Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:18 np0005540825 nova_compute[256151]: 2025-12-01 10:32:18.212 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:18 np0005540825 nova_compute[256151]: 2025-12-01 10:32:18.447 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:18.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:32:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:18.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:32:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:19.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:19.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:21.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:21.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:21] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:32:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:21] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:32:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:32:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:23.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:32:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:23.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:32:23 np0005540825 nova_compute[256151]: 2025-12-01 10:32:23.215 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:23 np0005540825 nova_compute[256151]: 2025-12-01 10:32:23.450 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:23.767Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:32:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:32:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:32:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:25.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:32:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:25.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:32:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:27.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:27.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:27.389Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:28 np0005540825 nova_compute[256151]: 2025-12-01 10:32:28.217 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:28 np0005540825 nova_compute[256151]: 2025-12-01 10:32:28.451 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:28.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:29.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:32:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:29.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:32:29 np0005540825 podman[292021]: 2025-12-01 10:32:29.739762629 +0000 UTC m=+0.116722089 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:32:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:31.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:32:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:32:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:31] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:32:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:31] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Dec  1 05:32:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:32:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:33.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:33 np0005540825 nova_compute[256151]: 2025-12-01 10:32:33.219 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:33 np0005540825 nova_compute[256151]: 2025-12-01 10:32:33.453 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:33.769Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:35.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:35.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:32:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:37.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:37.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:37.390Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:38 np0005540825 nova_compute[256151]: 2025-12-01 10:32:38.221 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:38 np0005540825 nova_compute[256151]: 2025-12-01 10:32:38.455 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:38.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:39.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:39.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:32:39
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.mgr', '.nfs', '.rgw.root']
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:32:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:32:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:32:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:32:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:32:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:41.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:41.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:41] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:32:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:41] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:32:41 np0005540825 ceph-mgr[74709]: [devicehealth INFO root] Check health
Dec  1 05:32:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:32:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:43.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:43 np0005540825 nova_compute[256151]: 2025-12-01 10:32:43.223 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:32:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:32:43 np0005540825 nova_compute[256151]: 2025-12-01 10:32:43.456 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:43.769Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:32:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:43.770Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:32:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:45.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:45 np0005540825 podman[292066]: 2025-12-01 10:32:45.19099054 +0000 UTC m=+0.061002866 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 05:32:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:32:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:46 np0005540825 nova_compute[256151]: 2025-12-01 10:32:46.553 256155 DEBUG oslo_concurrency.processutils [None req-64cac02e-2179-4e9c-a452-97dadcc3883d 8f40188af6da43f2a935c6c0b2de642b 9a5734898a6345909986f17ddf57b27d - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:32:46 np0005540825 nova_compute[256151]: 2025-12-01 10:32:46.575 256155 DEBUG oslo_concurrency.processutils [None req-64cac02e-2179-4e9c-a452-97dadcc3883d 8f40188af6da43f2a935c6c0b2de642b 9a5734898a6345909986f17ddf57b27d - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:32:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:47.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:47.391Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:48 np0005540825 nova_compute[256151]: 2025-12-01 10:32:48.225 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:48 np0005540825 podman[292092]: 2025-12-01 10:32:48.229064985 +0000 UTC m=+0.084792688 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:32:48 np0005540825 nova_compute[256151]: 2025-12-01 10:32:48.458 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:48.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:49.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:49.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:50 np0005540825 nova_compute[256151]: 2025-12-01 10:32:50.522 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:50 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:32:50.523 163291 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '36:10:da', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4e:5c:35:98:90:37'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 05:32:50 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:32:50.524 163291 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 05:32:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:51.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:32:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:51.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:32:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:51] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:32:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:32:51] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:32:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:32:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:53 np0005540825 nova_compute[256151]: 2025-12-01 10:32:53.227 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:32:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:53.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:32:53 np0005540825 nova_compute[256151]: 2025-12-01 10:32:53.460 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:53.771Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:32:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:32:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:55.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:55.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:55 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:32:55.526 163291 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4d9738cf-2abf-48e2-9303-677669784912, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 05:32:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:32:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:32:57 np0005540825 nova_compute[256151]: 2025-12-01 10:32:57.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:32:57 np0005540825 nova_compute[256151]: 2025-12-01 10:32:57.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:32:57 np0005540825 nova_compute[256151]: 2025-12-01 10:32:57.029 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:32:57 np0005540825 nova_compute[256151]: 2025-12-01 10:32:57.047 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:32:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:57.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:57.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:57.392Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:58 np0005540825 nova_compute[256151]: 2025-12-01 10:32:58.230 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:58 np0005540825 nova_compute[256151]: 2025-12-01 10:32:58.463 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:32:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:32:58.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:32:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:32:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:32:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:32:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:32:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:32:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:32:59.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:32:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:32:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:32:59.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:32:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:32:59 np0005540825 podman[292198]: 2025-12-01 10:32:59.944406551 +0000 UTC m=+0.104307908 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:33:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 578 B/s rd, 0 op/s
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:33:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:01.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:01 np0005540825 podman[292350]: 2025-12-01 10:33:01.067906977 +0000 UTC m=+0.028998413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:33:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:01.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:01] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:33:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:01] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:33:01 np0005540825 podman[292350]: 2025-12-01 10:33:01.657037835 +0000 UTC m=+0.618129241 container create c44cef1528ddc99a10513ea013f8922a8ab27d480a87f509b22c240d5f2afa69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:33:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:33:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:33:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:33:01 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:33:01 np0005540825 systemd[1]: Started libpod-conmon-c44cef1528ddc99a10513ea013f8922a8ab27d480a87f509b22c240d5f2afa69.scope.
Dec  1 05:33:01 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:33:01 np0005540825 podman[292350]: 2025-12-01 10:33:01.774017249 +0000 UTC m=+0.735108645 container init c44cef1528ddc99a10513ea013f8922a8ab27d480a87f509b22c240d5f2afa69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euler, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:33:01 np0005540825 podman[292350]: 2025-12-01 10:33:01.787724684 +0000 UTC m=+0.748816080 container start c44cef1528ddc99a10513ea013f8922a8ab27d480a87f509b22c240d5f2afa69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  1 05:33:01 np0005540825 podman[292350]: 2025-12-01 10:33:01.791183317 +0000 UTC m=+0.752274743 container attach c44cef1528ddc99a10513ea013f8922a8ab27d480a87f509b22c240d5f2afa69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:33:01 np0005540825 compassionate_euler[292366]: 167 167
Dec  1 05:33:01 np0005540825 systemd[1]: libpod-c44cef1528ddc99a10513ea013f8922a8ab27d480a87f509b22c240d5f2afa69.scope: Deactivated successfully.
Dec  1 05:33:01 np0005540825 podman[292350]: 2025-12-01 10:33:01.796133528 +0000 UTC m=+0.757224944 container died c44cef1528ddc99a10513ea013f8922a8ab27d480a87f509b22c240d5f2afa69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 05:33:01 np0005540825 systemd[1]: var-lib-containers-storage-overlay-beed853a9be8f1511a01108be5f4ff11fef77f74888f7cd799dd99be90e82519-merged.mount: Deactivated successfully.
Dec  1 05:33:01 np0005540825 podman[292350]: 2025-12-01 10:33:01.841519897 +0000 UTC m=+0.802611293 container remove c44cef1528ddc99a10513ea013f8922a8ab27d480a87f509b22c240d5f2afa69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euler, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  1 05:33:01 np0005540825 systemd[1]: libpod-conmon-c44cef1528ddc99a10513ea013f8922a8ab27d480a87f509b22c240d5f2afa69.scope: Deactivated successfully.
Dec  1 05:33:02 np0005540825 podman[292392]: 2025-12-01 10:33:02.021172861 +0000 UTC m=+0.052414857 container create 12a4d937d4dbd881392f1e461243f1ac6e8eb1c147d89f57b260cfb1e6ba7fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_meitner, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:33:02 np0005540825 nova_compute[256151]: 2025-12-01 10:33:02.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:33:02 np0005540825 systemd[1]: Started libpod-conmon-12a4d937d4dbd881392f1e461243f1ac6e8eb1c147d89f57b260cfb1e6ba7fe4.scope.
Dec  1 05:33:02 np0005540825 podman[292392]: 2025-12-01 10:33:01.998878637 +0000 UTC m=+0.030120643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:33:02 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:33:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c584211a6f2e94e204df1d49ec3a7194b6d3b1755c1bbc790a88ac6e05d5e6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c584211a6f2e94e204df1d49ec3a7194b6d3b1755c1bbc790a88ac6e05d5e6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c584211a6f2e94e204df1d49ec3a7194b6d3b1755c1bbc790a88ac6e05d5e6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c584211a6f2e94e204df1d49ec3a7194b6d3b1755c1bbc790a88ac6e05d5e6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:02 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c584211a6f2e94e204df1d49ec3a7194b6d3b1755c1bbc790a88ac6e05d5e6b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:02 np0005540825 podman[292392]: 2025-12-01 10:33:02.133417098 +0000 UTC m=+0.164659164 container init 12a4d937d4dbd881392f1e461243f1ac6e8eb1c147d89f57b260cfb1e6ba7fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_meitner, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  1 05:33:02 np0005540825 podman[292392]: 2025-12-01 10:33:02.146384674 +0000 UTC m=+0.177626670 container start 12a4d937d4dbd881392f1e461243f1ac6e8eb1c147d89f57b260cfb1e6ba7fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 05:33:02 np0005540825 podman[292392]: 2025-12-01 10:33:02.149686722 +0000 UTC m=+0.180928768 container attach 12a4d937d4dbd881392f1e461243f1ac6e8eb1c147d89f57b260cfb1e6ba7fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_meitner, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  1 05:33:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 578 B/s rd, 0 op/s
Dec  1 05:33:02 np0005540825 happy_meitner[292409]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:33:02 np0005540825 happy_meitner[292409]: --> All data devices are unavailable
Dec  1 05:33:02 np0005540825 systemd[1]: libpod-12a4d937d4dbd881392f1e461243f1ac6e8eb1c147d89f57b260cfb1e6ba7fe4.scope: Deactivated successfully.
Dec  1 05:33:02 np0005540825 conmon[292409]: conmon 12a4d937d4dbd881392f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-12a4d937d4dbd881392f1e461243f1ac6e8eb1c147d89f57b260cfb1e6ba7fe4.scope/container/memory.events
Dec  1 05:33:02 np0005540825 podman[292392]: 2025-12-01 10:33:02.525061877 +0000 UTC m=+0.556303913 container died 12a4d937d4dbd881392f1e461243f1ac6e8eb1c147d89f57b260cfb1e6ba7fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:33:02 np0005540825 systemd[1]: var-lib-containers-storage-overlay-6c584211a6f2e94e204df1d49ec3a7194b6d3b1755c1bbc790a88ac6e05d5e6b-merged.mount: Deactivated successfully.
Dec  1 05:33:02 np0005540825 podman[292392]: 2025-12-01 10:33:02.58265461 +0000 UTC m=+0.613896596 container remove 12a4d937d4dbd881392f1e461243f1ac6e8eb1c147d89f57b260cfb1e6ba7fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 05:33:02 np0005540825 systemd[1]: libpod-conmon-12a4d937d4dbd881392f1e461243f1ac6e8eb1c147d89f57b260cfb1e6ba7fe4.scope: Deactivated successfully.
Dec  1 05:33:03 np0005540825 nova_compute[256151]: 2025-12-01 10:33:03.022 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:33:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:03.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:03 np0005540825 nova_compute[256151]: 2025-12-01 10:33:03.232 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:03.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:03 np0005540825 podman[292531]: 2025-12-01 10:33:03.30145854 +0000 UTC m=+0.038541507 container create dc559db17d9dc4195f56f5f0c6d5749991b0800ebc26904c8ca9c3034b9a4ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Dec  1 05:33:03 np0005540825 systemd[1]: Started libpod-conmon-dc559db17d9dc4195f56f5f0c6d5749991b0800ebc26904c8ca9c3034b9a4ed1.scope.
Dec  1 05:33:03 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:33:03 np0005540825 podman[292531]: 2025-12-01 10:33:03.283554514 +0000 UTC m=+0.020637501 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:33:03 np0005540825 podman[292531]: 2025-12-01 10:33:03.379770626 +0000 UTC m=+0.116853643 container init dc559db17d9dc4195f56f5f0c6d5749991b0800ebc26904c8ca9c3034b9a4ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:33:03 np0005540825 podman[292531]: 2025-12-01 10:33:03.387142022 +0000 UTC m=+0.124224989 container start dc559db17d9dc4195f56f5f0c6d5749991b0800ebc26904c8ca9c3034b9a4ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goodall, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 05:33:03 np0005540825 podman[292531]: 2025-12-01 10:33:03.390669876 +0000 UTC m=+0.127752873 container attach dc559db17d9dc4195f56f5f0c6d5749991b0800ebc26904c8ca9c3034b9a4ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goodall, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  1 05:33:03 np0005540825 hardcore_goodall[292547]: 167 167
Dec  1 05:33:03 np0005540825 systemd[1]: libpod-dc559db17d9dc4195f56f5f0c6d5749991b0800ebc26904c8ca9c3034b9a4ed1.scope: Deactivated successfully.
Dec  1 05:33:03 np0005540825 podman[292531]: 2025-12-01 10:33:03.394681323 +0000 UTC m=+0.131764300 container died dc559db17d9dc4195f56f5f0c6d5749991b0800ebc26904c8ca9c3034b9a4ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  1 05:33:03 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d7fcf52a852be2d702bac15e9c21555ac2b0b67da6530e6356141ebfebdeb5af-merged.mount: Deactivated successfully.
Dec  1 05:33:03 np0005540825 podman[292531]: 2025-12-01 10:33:03.433222609 +0000 UTC m=+0.170305596 container remove dc559db17d9dc4195f56f5f0c6d5749991b0800ebc26904c8ca9c3034b9a4ed1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goodall, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:33:03 np0005540825 systemd[1]: libpod-conmon-dc559db17d9dc4195f56f5f0c6d5749991b0800ebc26904c8ca9c3034b9a4ed1.scope: Deactivated successfully.
Dec  1 05:33:03 np0005540825 nova_compute[256151]: 2025-12-01 10:33:03.464 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:03 np0005540825 podman[292572]: 2025-12-01 10:33:03.634657843 +0000 UTC m=+0.046702165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:33:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:03.772Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:33:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:03.773Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:33:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:03.773Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:04 np0005540825 podman[292572]: 2025-12-01 10:33:04.032943078 +0000 UTC m=+0.444987340 container create 87b7092f580012e9ba3292ac64f3f769bfab23e6f0e6a5f99b699389226cbd37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_booth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Dec  1 05:33:04 np0005540825 systemd[1]: Started libpod-conmon-87b7092f580012e9ba3292ac64f3f769bfab23e6f0e6a5f99b699389226cbd37.scope.
Dec  1 05:33:04 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:33:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5ac7431cffa9811a298d99d3240b26511fe6019412339f1004e469e82b000b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5ac7431cffa9811a298d99d3240b26511fe6019412339f1004e469e82b000b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5ac7431cffa9811a298d99d3240b26511fe6019412339f1004e469e82b000b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:04 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5ac7431cffa9811a298d99d3240b26511fe6019412339f1004e469e82b000b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 578 B/s rd, 0 op/s
Dec  1 05:33:04 np0005540825 podman[292572]: 2025-12-01 10:33:04.58621215 +0000 UTC m=+0.998256462 container init 87b7092f580012e9ba3292ac64f3f769bfab23e6f0e6a5f99b699389226cbd37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:33:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:33:04.591 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:33:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:33:04.592 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:33:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:33:04.592 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:33:04 np0005540825 podman[292572]: 2025-12-01 10:33:04.598711423 +0000 UTC m=+1.010755655 container start 87b7092f580012e9ba3292ac64f3f769bfab23e6f0e6a5f99b699389226cbd37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_booth, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  1 05:33:04 np0005540825 podman[292572]: 2025-12-01 10:33:04.687146038 +0000 UTC m=+1.099190310 container attach 87b7092f580012e9ba3292ac64f3f769bfab23e6f0e6a5f99b699389226cbd37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:33:04 np0005540825 romantic_booth[292589]: {
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:    "1": [
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:        {
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            "devices": [
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "/dev/loop3"
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            ],
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            "lv_name": "ceph_lv0",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            "lv_size": "21470642176",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            "name": "ceph_lv0",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            "tags": {
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.cluster_name": "ceph",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.crush_device_class": "",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.encrypted": "0",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.osd_id": "1",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.type": "block",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.vdo": "0",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:                "ceph.with_tpm": "0"
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            },
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            "type": "block",
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:            "vg_name": "ceph_vg0"
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:        }
Dec  1 05:33:04 np0005540825 romantic_booth[292589]:    ]
Dec  1 05:33:04 np0005540825 romantic_booth[292589]: }
Dec  1 05:33:04 np0005540825 systemd[1]: libpod-87b7092f580012e9ba3292ac64f3f769bfab23e6f0e6a5f99b699389226cbd37.scope: Deactivated successfully.
Dec  1 05:33:04 np0005540825 podman[292572]: 2025-12-01 10:33:04.928223587 +0000 UTC m=+1.340267849 container died 87b7092f580012e9ba3292ac64f3f769bfab23e6f0e6a5f99b699389226cbd37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_booth, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:33:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:05.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:33:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:33:05 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9d5ac7431cffa9811a298d99d3240b26511fe6019412339f1004e469e82b000b-merged.mount: Deactivated successfully.
Dec  1 05:33:05 np0005540825 podman[292572]: 2025-12-01 10:33:05.680049746 +0000 UTC m=+2.092094018 container remove 87b7092f580012e9ba3292ac64f3f769bfab23e6f0e6a5f99b699389226cbd37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_booth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 05:33:05 np0005540825 systemd[1]: libpod-conmon-87b7092f580012e9ba3292ac64f3f769bfab23e6f0e6a5f99b699389226cbd37.scope: Deactivated successfully.
Dec  1 05:33:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:06 np0005540825 nova_compute[256151]: 2025-12-01 10:33:06.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:33:06 np0005540825 nova_compute[256151]: 2025-12-01 10:33:06.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:33:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 578 B/s rd, 0 op/s
Dec  1 05:33:06 np0005540825 podman[292709]: 2025-12-01 10:33:06.338692844 +0000 UTC m=+0.032015463 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:33:06 np0005540825 podman[292709]: 2025-12-01 10:33:06.522891049 +0000 UTC m=+0.216213658 container create 5c440f7f41d01b3f88bc236663a961a3567cb64b2eeb14383efd626cd38a554c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:33:06 np0005540825 systemd[1]: Started libpod-conmon-5c440f7f41d01b3f88bc236663a961a3567cb64b2eeb14383efd626cd38a554c.scope.
Dec  1 05:33:06 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:33:06 np0005540825 podman[292709]: 2025-12-01 10:33:06.629620971 +0000 UTC m=+0.322943610 container init 5c440f7f41d01b3f88bc236663a961a3567cb64b2eeb14383efd626cd38a554c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  1 05:33:06 np0005540825 podman[292709]: 2025-12-01 10:33:06.639052862 +0000 UTC m=+0.332375461 container start 5c440f7f41d01b3f88bc236663a961a3567cb64b2eeb14383efd626cd38a554c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  1 05:33:06 np0005540825 podman[292709]: 2025-12-01 10:33:06.642530194 +0000 UTC m=+0.335852853 container attach 5c440f7f41d01b3f88bc236663a961a3567cb64b2eeb14383efd626cd38a554c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:33:06 np0005540825 systemd[1]: libpod-5c440f7f41d01b3f88bc236663a961a3567cb64b2eeb14383efd626cd38a554c.scope: Deactivated successfully.
Dec  1 05:33:06 np0005540825 priceless_euler[292725]: 167 167
Dec  1 05:33:06 np0005540825 conmon[292725]: conmon 5c440f7f41d01b3f88bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c440f7f41d01b3f88bc236663a961a3567cb64b2eeb14383efd626cd38a554c.scope/container/memory.events
Dec  1 05:33:06 np0005540825 podman[292709]: 2025-12-01 10:33:06.647070115 +0000 UTC m=+0.340392744 container died 5c440f7f41d01b3f88bc236663a961a3567cb64b2eeb14383efd626cd38a554c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Dec  1 05:33:06 np0005540825 systemd[1]: var-lib-containers-storage-overlay-aac588c1e29131e77b2b08c2b86a680ea3ef7bb563698aab86494e3213355b00-merged.mount: Deactivated successfully.
Dec  1 05:33:06 np0005540825 podman[292709]: 2025-12-01 10:33:06.691117348 +0000 UTC m=+0.384439937 container remove 5c440f7f41d01b3f88bc236663a961a3567cb64b2eeb14383efd626cd38a554c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_euler, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 05:33:06 np0005540825 systemd[1]: libpod-conmon-5c440f7f41d01b3f88bc236663a961a3567cb64b2eeb14383efd626cd38a554c.scope: Deactivated successfully.
Dec  1 05:33:06 np0005540825 podman[292748]: 2025-12-01 10:33:06.896025284 +0000 UTC m=+0.058423026 container create 8f931118a86cb32f998d8058023cfea2d064f7fe954c97d9b39973919b66d6d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  1 05:33:06 np0005540825 systemd[1]: Started libpod-conmon-8f931118a86cb32f998d8058023cfea2d064f7fe954c97d9b39973919b66d6d3.scope.
Dec  1 05:33:06 np0005540825 podman[292748]: 2025-12-01 10:33:06.876504585 +0000 UTC m=+0.038902347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:33:06 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:33:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d9bc85d725cc42d9053f09e930c4c6085dd6911aac4ab501d4948b681ca467/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d9bc85d725cc42d9053f09e930c4c6085dd6911aac4ab501d4948b681ca467/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d9bc85d725cc42d9053f09e930c4c6085dd6911aac4ab501d4948b681ca467/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:06 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d9bc85d725cc42d9053f09e930c4c6085dd6911aac4ab501d4948b681ca467/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:33:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:33:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:07.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:33:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  1 05:33:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457096563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  1 05:33:07 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  1 05:33:07 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457096563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  1 05:33:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:07.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:07.393Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:07 np0005540825 podman[292748]: 2025-12-01 10:33:07.459200951 +0000 UTC m=+0.621598763 container init 8f931118a86cb32f998d8058023cfea2d064f7fe954c97d9b39973919b66d6d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 05:33:07 np0005540825 podman[292748]: 2025-12-01 10:33:07.467510982 +0000 UTC m=+0.629908724 container start 8f931118a86cb32f998d8058023cfea2d064f7fe954c97d9b39973919b66d6d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_turing, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:33:07 np0005540825 podman[292748]: 2025-12-01 10:33:07.924603843 +0000 UTC m=+1.087001605 container attach 8f931118a86cb32f998d8058023cfea2d064f7fe954c97d9b39973919b66d6d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_turing, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  1 05:33:08 np0005540825 nova_compute[256151]: 2025-12-01 10:33:08.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:33:08 np0005540825 nova_compute[256151]: 2025-12-01 10:33:08.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 05:33:08 np0005540825 lvm[292841]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:33:08 np0005540825 lvm[292841]: VG ceph_vg0 finished
Dec  1 05:33:08 np0005540825 sweet_turing[292765]: {}
Dec  1 05:33:08 np0005540825 systemd[1]: libpod-8f931118a86cb32f998d8058023cfea2d064f7fe954c97d9b39973919b66d6d3.scope: Deactivated successfully.
Dec  1 05:33:08 np0005540825 podman[292748]: 2025-12-01 10:33:08.177478357 +0000 UTC m=+1.339876119 container died 8f931118a86cb32f998d8058023cfea2d064f7fe954c97d9b39973919b66d6d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_turing, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  1 05:33:08 np0005540825 systemd[1]: libpod-8f931118a86cb32f998d8058023cfea2d064f7fe954c97d9b39973919b66d6d3.scope: Consumed 1.074s CPU time.
Dec  1 05:33:08 np0005540825 systemd[1]: var-lib-containers-storage-overlay-12d9bc85d725cc42d9053f09e930c4c6085dd6911aac4ab501d4948b681ca467-merged.mount: Deactivated successfully.
Dec  1 05:33:08 np0005540825 podman[292748]: 2025-12-01 10:33:08.221655633 +0000 UTC m=+1.384053385 container remove 8f931118a86cb32f998d8058023cfea2d064f7fe954c97d9b39973919b66d6d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:33:08 np0005540825 systemd[1]: libpod-conmon-8f931118a86cb32f998d8058023cfea2d064f7fe954c97d9b39973919b66d6d3.scope: Deactivated successfully.
Dec  1 05:33:08 np0005540825 nova_compute[256151]: 2025-12-01 10:33:08.234 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:33:08 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:33:08 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:33:08 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:33:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 578 B/s rd, 0 op/s
Dec  1 05:33:08 np0005540825 nova_compute[256151]: 2025-12-01 10:33:08.466 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:08.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:33:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:08.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:33:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:09.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000053s ======
Dec  1 05:33:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:09.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec  1 05:33:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=grafana.update.checker t=2025-12-01T10:33:09.335162602Z level=info msg="Update check succeeded" duration=55.659293ms
Dec  1 05:33:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=plugins.update.checker t=2025-12-01T10:33:09.341426408Z level=info msg="Update check succeeded" duration=69.48691ms
Dec  1 05:33:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:33:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:33:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:33:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:33:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:33:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:33:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:33:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:33:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0[105677]: logger=cleanup t=2025-12-01T10:33:09.748778235Z level=info msg="Completed cleanup jobs" duration=541.010665ms
Dec  1 05:33:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 578 B/s rd, 0 op/s
Dec  1 05:33:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.054 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.055 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.055 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.055 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.056 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:33:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:11.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:11.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:11] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:33:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:11] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:33:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:33:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530923376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.548 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.707 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.708 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4467MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.709 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.709 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.765 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.765 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 05:33:11 np0005540825 nova_compute[256151]: 2025-12-01 10:33:11.781 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:33:12 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:33:12 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:33:12 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:33:12 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113769782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:33:12 np0005540825 nova_compute[256151]: 2025-12-01 10:33:12.269 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:33:12 np0005540825 nova_compute[256151]: 2025-12-01 10:33:12.276 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:33:12 np0005540825 nova_compute[256151]: 2025-12-01 10:33:12.299 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 05:33:12 np0005540825 nova_compute[256151]: 2025-12-01 10:33:12.300 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 05:33:12 np0005540825 nova_compute[256151]: 2025-12-01 10:33:12.301 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:33:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:33:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:13.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:13 np0005540825 nova_compute[256151]: 2025-12-01 10:33:13.236 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:13.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:13 np0005540825 nova_compute[256151]: 2025-12-01 10:33:13.468 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:13.774Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:14 np0005540825 nova_compute[256151]: 2025-12-01 10:33:14.302 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:33:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:15.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:15.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:16 np0005540825 nova_compute[256151]: 2025-12-01 10:33:16.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:33:16 np0005540825 podman[292958]: 2025-12-01 10:33:16.249679668 +0000 UTC m=+0.097570349 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:33:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:33:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:17.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:17.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:17.394Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:33:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:17.395Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:33:18 np0005540825 nova_compute[256151]: 2025-12-01 10:33:18.240 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:18 np0005540825 nova_compute[256151]: 2025-12-01 10:33:18.470 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:18.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:33:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:19.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:33:19 np0005540825 podman[292980]: 2025-12-01 10:33:19.207525748 +0000 UTC m=+0.076872178 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 05:33:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:19.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:21.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:21.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:21] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:33:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:21] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Dec  1 05:33:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:33:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:23.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:23.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:23 np0005540825 nova_compute[256151]: 2025-12-01 10:33:23.297 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:23 np0005540825 nova_compute[256151]: 2025-12-01 10:33:23.472 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:23.775Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:33:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:23.776Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:33:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:33:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:25.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:33:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:25.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:33:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:33:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:27.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:27.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:27.396Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:28 np0005540825 nova_compute[256151]: 2025-12-01 10:33:28.299 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:28 np0005540825 nova_compute[256151]: 2025-12-01 10:33:28.475 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:28.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:29.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:29.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:30 np0005540825 podman[293036]: 2025-12-01 10:33:30.092600597 +0000 UTC m=+0.088674902 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  1 05:33:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.633728) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585210633800, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1632, "num_deletes": 251, "total_data_size": 3244246, "memory_usage": 3288336, "flush_reason": "Manual Compaction"}
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585210655438, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3141933, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35464, "largest_seqno": 37095, "table_properties": {"data_size": 3134277, "index_size": 4599, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15807, "raw_average_key_size": 20, "raw_value_size": 3119073, "raw_average_value_size": 3993, "num_data_blocks": 200, "num_entries": 781, "num_filter_entries": 781, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764585046, "oldest_key_time": 1764585046, "file_creation_time": 1764585210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 21747 microseconds, and 11226 cpu microseconds.
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.655484) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3141933 bytes OK
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.655503) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.657407) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.657423) EVENT_LOG_v1 {"time_micros": 1764585210657418, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.657441) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3237371, prev total WAL file size 3237371, number of live WAL files 2.
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.658582) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3068KB)], [77(11MB)]
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585210658676, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 15455876, "oldest_snapshot_seqno": -1}
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6756 keys, 13342734 bytes, temperature: kUnknown
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585210744629, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 13342734, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13299835, "index_size": 24931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 177380, "raw_average_key_size": 26, "raw_value_size": 13180375, "raw_average_value_size": 1950, "num_data_blocks": 980, "num_entries": 6756, "num_filter_entries": 6756, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764585210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.744889) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13342734 bytes
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.746477) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.7 rd, 155.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 11.7 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(9.2) write-amplify(4.2) OK, records in: 7272, records dropped: 516 output_compression: NoCompression
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.746501) EVENT_LOG_v1 {"time_micros": 1764585210746490, "job": 44, "event": "compaction_finished", "compaction_time_micros": 86025, "compaction_time_cpu_micros": 51331, "output_level": 6, "num_output_files": 1, "total_output_size": 13342734, "num_input_records": 7272, "num_output_records": 6756, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585210747200, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585210750039, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.658420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.750107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.750114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.750117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.750119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:33:30.750121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:33:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:31.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:31.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:31] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:33:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:31] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:33:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:33:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:33.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:33.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:33 np0005540825 nova_compute[256151]: 2025-12-01 10:33:33.348 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:33 np0005540825 nova_compute[256151]: 2025-12-01 10:33:33.477 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:33:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:33.776Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:35.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:35.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:33:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:37.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:37.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:37.397Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:38 np0005540825 nova_compute[256151]: 2025-12-01 10:33:38.351 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:33:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:38 np0005540825 nova_compute[256151]: 2025-12-01 10:33:38.479 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:33:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:38.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:39.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:39.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:33:39
Dec  1 05:33:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:33:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:33:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['.rgw.root', '.nfs', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'images', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.meta']
Dec  1 05:33:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:33:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:33:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:33:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:33:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:33:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:33:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:33:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:33:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:33:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:41.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:41.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:41] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:33:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:41] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:33:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:33:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:43.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:43.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:43 np0005540825 nova_compute[256151]: 2025-12-01 10:33:43.353 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:33:43 np0005540825 nova_compute[256151]: 2025-12-01 10:33:43.481 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:33:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:43.777Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:45.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:33:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:45.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:33:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:33:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:47.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:47 np0005540825 podman[293079]: 2025-12-01 10:33:47.229161545 +0000 UTC m=+0.083170046 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  1 05:33:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:47.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:47.398Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:48 np0005540825 nova_compute[256151]: 2025-12-01 10:33:48.357 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:33:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:48 np0005540825 nova_compute[256151]: 2025-12-01 10:33:48.483 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:33:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:48.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:49.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:49.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:50 np0005540825 podman[293128]: 2025-12-01 10:33:50.156066759 +0000 UTC m=+0.057274606 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  1 05:33:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:33:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:51.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:33:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:33:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:51.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:33:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:51] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:33:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:33:51] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:33:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:33:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:53.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:53.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:53 np0005540825 nova_compute[256151]: 2025-12-01 10:33:53.360 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:33:53 np0005540825 nova_compute[256151]: 2025-12-01 10:33:53.484 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:33:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:53.778Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:33:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:53.778Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:33:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:33:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:55.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:33:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:55.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:33:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:33:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:33:57 np0005540825 nova_compute[256151]: 2025-12-01 10:33:57.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:33:57 np0005540825 nova_compute[256151]: 2025-12-01 10:33:57.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:33:57 np0005540825 nova_compute[256151]: 2025-12-01 10:33:57.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:33:57 np0005540825 nova_compute[256151]: 2025-12-01 10:33:57.044 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:33:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:57.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:33:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:57.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:33:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:57.399Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:58 np0005540825 nova_compute[256151]: 2025-12-01 10:33:58.363 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:33:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:33:58 np0005540825 nova_compute[256151]: 2025-12-01 10:33:58.486 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:33:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:33:58.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:33:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:33:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:33:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:33:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:33:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:33:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:33:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:33:59.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:33:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:33:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:33:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:33:59.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:34:00 np0005540825 podman[293159]: 2025-12-01 10:34:00.250054274 +0000 UTC m=+0.110905054 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  1 05:34:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:34:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:01.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:01.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:01] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:34:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:01] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:34:02 np0005540825 nova_compute[256151]: 2025-12-01 10:34:02.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:34:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:34:03 np0005540825 nova_compute[256151]: 2025-12-01 10:34:03.022 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:34:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:03.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:03.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:03 np0005540825 nova_compute[256151]: 2025-12-01 10:34:03.365 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:34:03 np0005540825 nova_compute[256151]: 2025-12-01 10:34:03.488 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:34:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:03.779Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:34:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:03.779Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:34:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:34:04.592 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:34:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:34:04.592 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:34:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:34:04.592 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:34:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:05.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:05.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:34:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:34:07 np0005540825 nova_compute[256151]: 2025-12-01 10:34:07.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:34:07 np0005540825 nova_compute[256151]: 2025-12-01 10:34:07.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:34:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:07.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:07.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:07.400Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:08 np0005540825 nova_compute[256151]: 2025-12-01 10:34:08.023 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:34:08 np0005540825 nova_compute[256151]: 2025-12-01 10:34:08.367 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:34:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:08 np0005540825 nova_compute[256151]: 2025-12-01 10:34:08.490 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:34:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:08.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:34:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:09.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  1 05:34:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:09.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  1 05:34:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  1 05:34:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 05:34:09 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  1 05:34:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:34:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:34:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:34:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:34:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:34:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:34:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:34:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:34:10 np0005540825 nova_compute[256151]: 2025-12-01 10:34:10.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:34:10 np0005540825 nova_compute[256151]: 2025-12-01 10:34:10.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:34:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  1 05:34:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  1 05:34:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  1 05:34:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  1 05:34:10 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:10 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:34:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:11.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 05:34:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:11.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:11] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:34:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:11] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:34:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 837 B/s rd, 0 op/s
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:11 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:34:12 np0005540825 nova_compute[256151]: 2025-12-01 10:34:12.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:34:12 np0005540825 podman[293398]: 2025-12-01 10:34:12.121501368 +0000 UTC m=+0.050133606 container create 50b789dbc59f722d596e5cae21b81cc3914a3372c4c4815513c0eff63cfd1bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hugle, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:34:12 np0005540825 systemd[1]: Started libpod-conmon-50b789dbc59f722d596e5cae21b81cc3914a3372c4c4815513c0eff63cfd1bdc.scope.
Dec  1 05:34:12 np0005540825 podman[293398]: 2025-12-01 10:34:12.095830074 +0000 UTC m=+0.024462372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:34:12 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:34:12 np0005540825 podman[293398]: 2025-12-01 10:34:12.230390177 +0000 UTC m=+0.159022445 container init 50b789dbc59f722d596e5cae21b81cc3914a3372c4c4815513c0eff63cfd1bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hugle, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  1 05:34:12 np0005540825 podman[293398]: 2025-12-01 10:34:12.240397974 +0000 UTC m=+0.169030222 container start 50b789dbc59f722d596e5cae21b81cc3914a3372c4c4815513c0eff63cfd1bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:34:12 np0005540825 podman[293398]: 2025-12-01 10:34:12.244113773 +0000 UTC m=+0.172746021 container attach 50b789dbc59f722d596e5cae21b81cc3914a3372c4c4815513c0eff63cfd1bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:34:12 np0005540825 frosty_hugle[293414]: 167 167
Dec  1 05:34:12 np0005540825 systemd[1]: libpod-50b789dbc59f722d596e5cae21b81cc3914a3372c4c4815513c0eff63cfd1bdc.scope: Deactivated successfully.
Dec  1 05:34:12 np0005540825 conmon[293414]: conmon 50b789dbc59f722d596e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50b789dbc59f722d596e5cae21b81cc3914a3372c4c4815513c0eff63cfd1bdc.scope/container/memory.events
Dec  1 05:34:12 np0005540825 podman[293398]: 2025-12-01 10:34:12.250261636 +0000 UTC m=+0.178893885 container died 50b789dbc59f722d596e5cae21b81cc3914a3372c4c4815513c0eff63cfd1bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hugle, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 05:34:12 np0005540825 systemd[1]: var-lib-containers-storage-overlay-0cea16cd9a2ae367be12d35ec297f5116bcf70c22c8946e645d27eb56976d410-merged.mount: Deactivated successfully.
Dec  1 05:34:12 np0005540825 podman[293398]: 2025-12-01 10:34:12.292170222 +0000 UTC m=+0.220802470 container remove 50b789dbc59f722d596e5cae21b81cc3914a3372c4c4815513c0eff63cfd1bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hugle, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:34:12 np0005540825 systemd[1]: libpod-conmon-50b789dbc59f722d596e5cae21b81cc3914a3372c4c4815513c0eff63cfd1bdc.scope: Deactivated successfully.
Dec  1 05:34:12 np0005540825 podman[293439]: 2025-12-01 10:34:12.528189687 +0000 UTC m=+0.047584758 container create d1d0813e8a3cfb81d86b39f318544bb5ac5d4f310d378dfb0b9e86d7861b29d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:34:12 np0005540825 systemd[1]: Started libpod-conmon-d1d0813e8a3cfb81d86b39f318544bb5ac5d4f310d378dfb0b9e86d7861b29d5.scope.
Dec  1 05:34:12 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:34:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2089d9e276152ac74017030d2b3cbf047a783526e8d093c0f4252f3024ba6c09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2089d9e276152ac74017030d2b3cbf047a783526e8d093c0f4252f3024ba6c09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2089d9e276152ac74017030d2b3cbf047a783526e8d093c0f4252f3024ba6c09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2089d9e276152ac74017030d2b3cbf047a783526e8d093c0f4252f3024ba6c09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:12 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2089d9e276152ac74017030d2b3cbf047a783526e8d093c0f4252f3024ba6c09/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:12 np0005540825 podman[293439]: 2025-12-01 10:34:12.511545564 +0000 UTC m=+0.030940655 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:34:12 np0005540825 podman[293439]: 2025-12-01 10:34:12.612093801 +0000 UTC m=+0.131488922 container init d1d0813e8a3cfb81d86b39f318544bb5ac5d4f310d378dfb0b9e86d7861b29d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  1 05:34:12 np0005540825 podman[293439]: 2025-12-01 10:34:12.628508488 +0000 UTC m=+0.147903579 container start d1d0813e8a3cfb81d86b39f318544bb5ac5d4f310d378dfb0b9e86d7861b29d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hofstadter, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  1 05:34:12 np0005540825 podman[293439]: 2025-12-01 10:34:12.63271539 +0000 UTC m=+0.152110491 container attach d1d0813e8a3cfb81d86b39f318544bb5ac5d4f310d378dfb0b9e86d7861b29d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:34:12 np0005540825 nova_compute[256151]: 2025-12-01 10:34:12.681 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:34:12 np0005540825 nova_compute[256151]: 2025-12-01 10:34:12.683 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:34:12 np0005540825 nova_compute[256151]: 2025-12-01 10:34:12.683 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:34:12 np0005540825 nova_compute[256151]: 2025-12-01 10:34:12.684 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 05:34:12 np0005540825 nova_compute[256151]: 2025-12-01 10:34:12.684 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:34:12 np0005540825 loving_hofstadter[293455]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:34:12 np0005540825 loving_hofstadter[293455]: --> All data devices are unavailable
Dec  1 05:34:13 np0005540825 systemd[1]: libpod-d1d0813e8a3cfb81d86b39f318544bb5ac5d4f310d378dfb0b9e86d7861b29d5.scope: Deactivated successfully.
Dec  1 05:34:13 np0005540825 podman[293439]: 2025-12-01 10:34:13.016094049 +0000 UTC m=+0.535489160 container died d1d0813e8a3cfb81d86b39f318544bb5ac5d4f310d378dfb0b9e86d7861b29d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:34:13 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2089d9e276152ac74017030d2b3cbf047a783526e8d093c0f4252f3024ba6c09-merged.mount: Deactivated successfully.
Dec  1 05:34:13 np0005540825 podman[293439]: 2025-12-01 10:34:13.060116761 +0000 UTC m=+0.579511832 container remove d1d0813e8a3cfb81d86b39f318544bb5ac5d4f310d378dfb0b9e86d7861b29d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:34:13 np0005540825 systemd[1]: libpod-conmon-d1d0813e8a3cfb81d86b39f318544bb5ac5d4f310d378dfb0b9e86d7861b29d5.scope: Deactivated successfully.
Dec  1 05:34:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:13.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:34:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228211347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:34:13 np0005540825 nova_compute[256151]: 2025-12-01 10:34:13.215 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:34:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:13.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:13 np0005540825 nova_compute[256151]: 2025-12-01 10:34:13.370 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:13 np0005540825 nova_compute[256151]: 2025-12-01 10:34:13.417 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 05:34:13 np0005540825 nova_compute[256151]: 2025-12-01 10:34:13.418 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4448MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 05:34:13 np0005540825 nova_compute[256151]: 2025-12-01 10:34:13.419 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 05:34:13 np0005540825 nova_compute[256151]: 2025-12-01 10:34:13.419 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 05:34:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 558 B/s rd, 0 op/s
Dec  1 05:34:13 np0005540825 nova_compute[256151]: 2025-12-01 10:34:13.492 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:13 np0005540825 nova_compute[256151]: 2025-12-01 10:34:13.590 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 05:34:13 np0005540825 nova_compute[256151]: 2025-12-01 10:34:13.590 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 05:34:13 np0005540825 nova_compute[256151]: 2025-12-01 10:34:13.616 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 05:34:13 np0005540825 podman[293592]: 2025-12-01 10:34:13.661770081 +0000 UTC m=+0.037884579 container create a88db9b9140373f96083ee00cbcae0e96c0a44618c0a581013f02d265db68262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_buck, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:34:13 np0005540825 systemd[1]: Started libpod-conmon-a88db9b9140373f96083ee00cbcae0e96c0a44618c0a581013f02d265db68262.scope.
Dec  1 05:34:13 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:34:13 np0005540825 podman[293592]: 2025-12-01 10:34:13.719165529 +0000 UTC m=+0.095280057 container init a88db9b9140373f96083ee00cbcae0e96c0a44618c0a581013f02d265db68262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_buck, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:34:13 np0005540825 podman[293592]: 2025-12-01 10:34:13.725261961 +0000 UTC m=+0.101376459 container start a88db9b9140373f96083ee00cbcae0e96c0a44618c0a581013f02d265db68262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_buck, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:34:13 np0005540825 naughty_buck[293611]: 167 167
Dec  1 05:34:13 np0005540825 podman[293592]: 2025-12-01 10:34:13.730333096 +0000 UTC m=+0.106447604 container attach a88db9b9140373f96083ee00cbcae0e96c0a44618c0a581013f02d265db68262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_buck, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  1 05:34:13 np0005540825 systemd[1]: libpod-a88db9b9140373f96083ee00cbcae0e96c0a44618c0a581013f02d265db68262.scope: Deactivated successfully.
Dec  1 05:34:13 np0005540825 podman[293592]: 2025-12-01 10:34:13.730832859 +0000 UTC m=+0.106947367 container died a88db9b9140373f96083ee00cbcae0e96c0a44618c0a581013f02d265db68262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_buck, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 05:34:13 np0005540825 podman[293592]: 2025-12-01 10:34:13.645448467 +0000 UTC m=+0.021562995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:34:13 np0005540825 systemd[1]: var-lib-containers-storage-overlay-96d39c5f5d4ac1efd2ac0505d9736287db5c6b847959014f32d773fe2c64d43e-merged.mount: Deactivated successfully.
Dec  1 05:34:13 np0005540825 podman[293592]: 2025-12-01 10:34:13.77516769 +0000 UTC m=+0.151282198 container remove a88db9b9140373f96083ee00cbcae0e96c0a44618c0a581013f02d265db68262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  1 05:34:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:13.780Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:13 np0005540825 systemd[1]: libpod-conmon-a88db9b9140373f96083ee00cbcae0e96c0a44618c0a581013f02d265db68262.scope: Deactivated successfully.
Dec  1 05:34:13 np0005540825 podman[293655]: 2025-12-01 10:34:13.968661522 +0000 UTC m=+0.064668923 container create 04c97f7d2c38a239aa11ae05364bf32d7440fa4828e2a582200b41d5d5d1c788 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:34:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:34:14 np0005540825 systemd[1]: Started libpod-conmon-04c97f7d2c38a239aa11ae05364bf32d7440fa4828e2a582200b41d5d5d1c788.scope.
Dec  1 05:34:14 np0005540825 podman[293655]: 2025-12-01 10:34:13.943535353 +0000 UTC m=+0.039542774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:34:14 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:34:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be607ff61ac81a3d5aa993922c67faf091d25c1acdba78036341234d859da8c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:34:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1574029017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:34:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be607ff61ac81a3d5aa993922c67faf091d25c1acdba78036341234d859da8c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be607ff61ac81a3d5aa993922c67faf091d25c1acdba78036341234d859da8c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:14 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be607ff61ac81a3d5aa993922c67faf091d25c1acdba78036341234d859da8c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:14 np0005540825 podman[293655]: 2025-12-01 10:34:14.083549791 +0000 UTC m=+0.179557232 container init 04c97f7d2c38a239aa11ae05364bf32d7440fa4828e2a582200b41d5d5d1c788 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 05:34:14 np0005540825 nova_compute[256151]: 2025-12-01 10:34:14.084 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:34:14 np0005540825 podman[293655]: 2025-12-01 10:34:14.096841655 +0000 UTC m=+0.192849056 container start 04c97f7d2c38a239aa11ae05364bf32d7440fa4828e2a582200b41d5d5d1c788 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  1 05:34:14 np0005540825 nova_compute[256151]: 2025-12-01 10:34:14.098 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 05:34:14 np0005540825 podman[293655]: 2025-12-01 10:34:14.10227011 +0000 UTC m=+0.198277531 container attach 04c97f7d2c38a239aa11ae05364bf32d7440fa4828e2a582200b41d5d5d1c788 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 05:34:14 np0005540825 nova_compute[256151]: 2025-12-01 10:34:14.116 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 05:34:14 np0005540825 nova_compute[256151]: 2025-12-01 10:34:14.118 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 05:34:14 np0005540825 nova_compute[256151]: 2025-12-01 10:34:14.118 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]: {
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:    "1": [
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:        {
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            "devices": [
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "/dev/loop3"
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            ],
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            "lv_name": "ceph_lv0",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            "lv_size": "21470642176",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            "name": "ceph_lv0",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            "tags": {
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.cluster_name": "ceph",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.crush_device_class": "",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.encrypted": "0",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.osd_id": "1",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.type": "block",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.vdo": "0",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:                "ceph.with_tpm": "0"
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            },
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            "type": "block",
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:            "vg_name": "ceph_vg0"
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:        }
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]:    ]
Dec  1 05:34:14 np0005540825 serene_leavitt[293672]: }
Dec  1 05:34:14 np0005540825 systemd[1]: libpod-04c97f7d2c38a239aa11ae05364bf32d7440fa4828e2a582200b41d5d5d1c788.scope: Deactivated successfully.
Dec  1 05:34:14 np0005540825 podman[293683]: 2025-12-01 10:34:14.435272607 +0000 UTC m=+0.027674008 container died 04c97f7d2c38a239aa11ae05364bf32d7440fa4828e2a582200b41d5d5d1c788 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:34:14 np0005540825 systemd[1]: var-lib-containers-storage-overlay-be607ff61ac81a3d5aa993922c67faf091d25c1acdba78036341234d859da8c3-merged.mount: Deactivated successfully.
Dec  1 05:34:14 np0005540825 podman[293683]: 2025-12-01 10:34:14.469611711 +0000 UTC m=+0.062013082 container remove 04c97f7d2c38a239aa11ae05364bf32d7440fa4828e2a582200b41d5d5d1c788 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_leavitt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec  1 05:34:14 np0005540825 systemd[1]: libpod-conmon-04c97f7d2c38a239aa11ae05364bf32d7440fa4828e2a582200b41d5d5d1c788.scope: Deactivated successfully.
Dec  1 05:34:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:15.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:15 np0005540825 podman[293790]: 2025-12-01 10:34:15.159792909 +0000 UTC m=+0.069188703 container create 382215c3284c1d4cd34c1812dbce9c905fe65cf5bc12e437610b7f778d2e35c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hypatia, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  1 05:34:15 np0005540825 systemd[1]: Started libpod-conmon-382215c3284c1d4cd34c1812dbce9c905fe65cf5bc12e437610b7f778d2e35c2.scope.
Dec  1 05:34:15 np0005540825 podman[293790]: 2025-12-01 10:34:15.128595268 +0000 UTC m=+0.037991142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:34:15 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:34:15 np0005540825 podman[293790]: 2025-12-01 10:34:15.251357957 +0000 UTC m=+0.160753791 container init 382215c3284c1d4cd34c1812dbce9c905fe65cf5bc12e437610b7f778d2e35c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  1 05:34:15 np0005540825 podman[293790]: 2025-12-01 10:34:15.257052199 +0000 UTC m=+0.166448033 container start 382215c3284c1d4cd34c1812dbce9c905fe65cf5bc12e437610b7f778d2e35c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  1 05:34:15 np0005540825 podman[293790]: 2025-12-01 10:34:15.260791318 +0000 UTC m=+0.170187122 container attach 382215c3284c1d4cd34c1812dbce9c905fe65cf5bc12e437610b7f778d2e35c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hypatia, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:34:15 np0005540825 friendly_hypatia[293806]: 167 167
Dec  1 05:34:15 np0005540825 systemd[1]: libpod-382215c3284c1d4cd34c1812dbce9c905fe65cf5bc12e437610b7f778d2e35c2.scope: Deactivated successfully.
Dec  1 05:34:15 np0005540825 podman[293790]: 2025-12-01 10:34:15.264059655 +0000 UTC m=+0.173455459 container died 382215c3284c1d4cd34c1812dbce9c905fe65cf5bc12e437610b7f778d2e35c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  1 05:34:15 np0005540825 systemd[1]: var-lib-containers-storage-overlay-3ee4f8e6dba3085d05537db1f7004821a4d642bdecdf67a73afa7ea82f67dc7e-merged.mount: Deactivated successfully.
Dec  1 05:34:15 np0005540825 podman[293790]: 2025-12-01 10:34:15.302484058 +0000 UTC m=+0.211879852 container remove 382215c3284c1d4cd34c1812dbce9c905fe65cf5bc12e437610b7f778d2e35c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:34:15 np0005540825 systemd[1]: libpod-conmon-382215c3284c1d4cd34c1812dbce9c905fe65cf5bc12e437610b7f778d2e35c2.scope: Deactivated successfully.
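cephadm repeatedly spawns short-lived utility containers from the ceph image; each one runs the full podman lifecycle (create, init, start, attach, died, remove) within about a second, bracketed by the matching libpod-conmon systemd scopes. A sketch that replays such lifecycle events as JSON via the real `podman events` subcommand, assuming podman is on PATH; the time window is illustrative and JSON field names may vary slightly by podman version:

    import json
    import subprocess

    # Replay container lifecycle events for a short window as JSON lines.
    out = subprocess.run(
        ["podman", "events", "--since", "2025-12-01T10:34:14",
         "--until", "2025-12-01T10:34:16", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"))  # e.g. "create friendly_hypatia"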
Dec  1 05:34:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:15.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 558 B/s rd, 0 op/s
Dec  1 05:34:15 np0005540825 podman[293832]: 2025-12-01 10:34:15.482823841 +0000 UTC m=+0.039751480 container create 7012877d1fcf098decee99849a5a7e640f7f16ddb6eab4c26cc4c7d42ebdea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 05:34:15 np0005540825 systemd[1]: Started libpod-conmon-7012877d1fcf098decee99849a5a7e640f7f16ddb6eab4c26cc4c7d42ebdea63.scope.
Dec  1 05:34:15 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:34:15 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b54acbd337d1163fb590801e5d402080afcceaa516ae3f3acffc4b5bee14cb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:15 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b54acbd337d1163fb590801e5d402080afcceaa516ae3f3acffc4b5bee14cb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:15 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b54acbd337d1163fb590801e5d402080afcceaa516ae3f3acffc4b5bee14cb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:34:15 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b54acbd337d1163fb590801e5d402080afcceaa516ae3f3acffc4b5bee14cb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
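The kernel warns that these xfs bind mounts only support timestamps up to 0x7fffffff seconds after the Unix epoch, the classic 32-bit time_t limit. A one-liner confirming the date that limit corresponds to:

    from datetime import datetime, timezone

    # 0x7fffffff seconds past the Unix epoch: the 32-bit time_t rollover.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00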
Dec  1 05:34:15 np0005540825 podman[293832]: 2025-12-01 10:34:15.562940524 +0000 UTC m=+0.119868173 container init 7012877d1fcf098decee99849a5a7e640f7f16ddb6eab4c26cc4c7d42ebdea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:34:15 np0005540825 podman[293832]: 2025-12-01 10:34:15.467370759 +0000 UTC m=+0.024298428 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:34:15 np0005540825 podman[293832]: 2025-12-01 10:34:15.570925366 +0000 UTC m=+0.127853015 container start 7012877d1fcf098decee99849a5a7e640f7f16ddb6eab4c26cc4c7d42ebdea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  1 05:34:15 np0005540825 podman[293832]: 2025-12-01 10:34:15.575824777 +0000 UTC m=+0.132752446 container attach 7012877d1fcf098decee99849a5a7e640f7f16ddb6eab4c26cc4c7d42ebdea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:34:15 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
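The monitor logs this _set_new_cache_sizes split every few seconds. The split policy itself is internal to ceph-mon; only the unit conversion of the logged byte counts is shown here:

    MiB = 1024 ** 2
    for name, n in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                    ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
        print(f"{name}: {n / MiB:.0f} MiB")
    # cache_size ~973 MiB; inc/full are exactly 332 MiB each; kv is exactly 304 MiB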
Dec  1 05:34:16 np0005540825 nova_compute[256151]: 2025-12-01 10:34:16.119 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:34:16 np0005540825 lvm[293925]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:34:16 np0005540825 lvm[293925]: VG ceph_vg0 finished
Dec  1 05:34:16 np0005540825 sad_hofstadter[293848]: {}
Dec  1 05:34:16 np0005540825 systemd[1]: libpod-7012877d1fcf098decee99849a5a7e640f7f16ddb6eab4c26cc4c7d42ebdea63.scope: Deactivated successfully.
Dec  1 05:34:16 np0005540825 systemd[1]: libpod-7012877d1fcf098decee99849a5a7e640f7f16ddb6eab4c26cc4c7d42ebdea63.scope: Consumed 1.133s CPU time.
Dec  1 05:34:16 np0005540825 podman[293832]: 2025-12-01 10:34:16.303936345 +0000 UTC m=+0.860863994 container died 7012877d1fcf098decee99849a5a7e640f7f16ddb6eab4c26cc4c7d42ebdea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  1 05:34:16 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5b54acbd337d1163fb590801e5d402080afcceaa516ae3f3acffc4b5bee14cb9-merged.mount: Deactivated successfully.
Dec  1 05:34:16 np0005540825 podman[293832]: 2025-12-01 10:34:16.352405655 +0000 UTC m=+0.909333304 container remove 7012877d1fcf098decee99849a5a7e640f7f16ddb6eab4c26cc4c7d42ebdea63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  1 05:34:16 np0005540825 systemd[1]: libpod-conmon-7012877d1fcf098decee99849a5a7e640f7f16ddb6eab4c26cc4c7d42ebdea63.scope: Deactivated successfully.
Dec  1 05:34:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:34:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:34:16 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:17.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:17.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:17.401Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
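Alertmanager cannot POST alerts to the Ceph dashboard's prometheus_receiver endpoint on compute-1 and compute-2 (context deadline exceeded, later dial timeouts to 192.168.122.101/102:8443), so the ceph-dashboard webhook gives up after two attempts. A hypothetical stand-in receiver, useful only to verify that port 8443 is reachable from this host; the path mirrors the URL in the log, and everything else is an assumption (the real receiver is the Ceph dashboard, not this sketch):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Swallow the alert payload and acknowledge it.
            length = int(self.headers.get("Content-Length", 0))
            self.rfile.read(length)
            self.send_response(200)
            self.end_headers()

    # The logged URLs are plain http on 8443, so a bare HTTPServer suffices
    # as a reachability probe; it is not a drop-in dashboard replacement.
    HTTPServer(("", 8443), Receiver).serve_forever()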
Dec  1 05:34:17 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:17 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:34:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 558 B/s rd, 0 op/s
Dec  1 05:34:18 np0005540825 nova_compute[256151]: 2025-12-01 10:34:18.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 05:34:18 np0005540825 podman[293969]: 2025-12-01 10:34:18.225569401 +0000 UTC m=+0.077723131 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
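These periodic health_status records embed each container's edpm_ansible config_data as a Python-style dict literal (single quotes, bare True). Assuming the `config_data={...}` span has been isolated from a captured line, the embedded structure can be recovered safely with ast.literal_eval; the excerpt below is a shortened illustration, not the full logged dict:

    import ast

    # Minimal excerpt of a config_data literal from a health_status record.
    raw = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
           "'net': 'host', 'privileged': True, 'restart': 'always'}")
    cfg = ast.literal_eval(raw)  # parses literals only; no code execution
    print(cfg["restart"], cfg["privileged"])  # -> always True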
Dec  1 05:34:18 np0005540825 nova_compute[256151]: 2025-12-01 10:34:18.370 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:18 np0005540825 nova_compute[256151]: 2025-12-01 10:34:18.494 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:18.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
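Each reconfiguration cycle restarts the ganesha grace period ("NFS Server Now IN GRACE, duration 90"), and the lift check finds no clients to wait for (clid count(0)). If rados_cluster_grace_enforcing's ret=-45 is a negated Linux errno (an assumption; ganesha may return its own codes here), the name can be looked up like this:

    import errno

    # Map the logged return code to a Linux errno name, assuming -45 is -errno.
    print(errno.errorcode.get(45, "unknown"))  # 'EL2NSYNC' on Linux builds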
Dec  1 05:34:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:19.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:19.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 558 B/s rd, 0 op/s
Dec  1 05:34:20 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:34:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:21.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:21 np0005540825 podman[293990]: 2025-12-01 10:34:21.198751679 +0000 UTC m=+0.060022709 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 05:34:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:21.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:21] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:34:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:21] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:34:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 837 B/s rd, 0 op/s
Dec  1 05:34:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:23.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:23.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:23 np0005540825 nova_compute[256151]: 2025-12-01 10:34:23.373 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:23 np0005540825 nova_compute[256151]: 2025-12-01 10:34:23.496 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:23.783Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:34:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:34:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
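The mgr dispatches this same "osd blocklist ls" query to the monitor roughly every 15 seconds; it can be reproduced from the CLI. A sketch, assuming the ceph CLI and an admin keyring are available on this host:

    import json
    import subprocess

    # Same query the mgr dispatches: {"prefix": "osd blocklist ls", "format": "json"}
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    entries = json.loads(out) if out.strip() else []
    print(len(entries), "blocklist entries")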
Dec  1 05:34:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:25.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:25.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:34:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:27.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:27.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:27.404Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:34:28 np0005540825 nova_compute[256151]: 2025-12-01 10:34:28.376 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:28 np0005540825 nova_compute[256151]: 2025-12-01 10:34:28.496 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:28.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:34:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:29.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:29.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:34:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:31.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:31 np0005540825 podman[294045]: 2025-12-01 10:34:31.280836467 +0000 UTC m=+0.129898950 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 05:34:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:31] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:34:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:31] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:34:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:31.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:34:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:33.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:33.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:33 np0005540825 nova_compute[256151]: 2025-12-01 10:34:33.378 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:33 np0005540825 nova_compute[256151]: 2025-12-01 10:34:33.498 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:33.784Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:34:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:35.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:35.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:35 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:34:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:37.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:37.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:37.405Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:34:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:37.405Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:34:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:34:38 np0005540825 nova_compute[256151]: 2025-12-01 10:34:38.380 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:38 np0005540825 nova_compute[256151]: 2025-12-01 10:34:38.500 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:38.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:34:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:34:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:39.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:34:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:34:39
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.log', 'vms', 'backups', 'images', '.nfs', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr']
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:34:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:34:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:34:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
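The pg_autoscaler figures above are internally consistent: each pool's pg target equals its capacity ratio times its bias times a factor of 300, which matches the default mon_target_pg_per_osd of 100 across 3 OSDs (the 300 multiplier is inferred from the logged numbers, not stated in the log); the result is then quantized to a power of two. A sketch reproducing three of the logged targets under that assumption:

    # Reproduce logged pg targets: ratio * bias * (100 PGs/OSD * 3 OSDs).
    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("images",             0.000665858301588852,  1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]
    for name, ratio, bias in pools:
        print(name, ratio * bias * 300)
    # .mgr   -> 0.0021557249951162337 (quantized to 1)
    # images -> 0.19975749047665559   (quantized to 32)
    # meta   -> 0.0006104707950771635 (quantized to 16)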
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:34:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:34:40 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:34:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:41] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:34:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:41] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:34:41 np0005540825 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 05:34:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:41.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:34:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:43.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:43 np0005540825 nova_compute[256151]: 2025-12-01 10:34:43.381 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:43.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:43 np0005540825 nova_compute[256151]: 2025-12-01 10:34:43.502 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 05:34:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:43.785Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:34:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:45.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:45 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:34:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:47.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:47.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:47.407Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:34:48 np0005540825 nova_compute[256151]: 2025-12-01 10:34:48.383 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:34:48 np0005540825 nova_compute[256151]: 2025-12-01 10:34:48.503 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:34:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:48.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:34:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:49.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:49 np0005540825 podman[294092]: 2025-12-01 10:34:49.226386106 +0000 UTC m=+0.081557992 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  1 05:34:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:49.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:34:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:51.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:51] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:34:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:34:51] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:34:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:51.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:34:52 np0005540825 podman[294139]: 2025-12-01 10:34:52.223021079 +0000 UTC m=+0.082100277 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec  1 05:34:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:53.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:53 np0005540825 nova_compute[256151]: 2025-12-01 10:34:53.384 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:34:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:34:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:53.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:34:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:53 np0005540825 nova_compute[256151]: 2025-12-01 10:34:53.505 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:34:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:53.786Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:34:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:34:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:34:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:55.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:34:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:55.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:34:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:55.896112) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585295896250, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1003, "num_deletes": 255, "total_data_size": 1751894, "memory_usage": 1783504, "flush_reason": "Manual Compaction"}
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585295913606, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1718985, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37096, "largest_seqno": 38098, "table_properties": {"data_size": 1713999, "index_size": 2510, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10753, "raw_average_key_size": 19, "raw_value_size": 1703981, "raw_average_value_size": 3109, "num_data_blocks": 108, "num_entries": 548, "num_filter_entries": 548, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764585212, "oldest_key_time": 1764585212, "file_creation_time": 1764585295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 17536 microseconds, and 9717 cpu microseconds.
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:55.913669) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1718985 bytes OK
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:55.913698) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:55.915538) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:55.915563) EVENT_LOG_v1 {"time_micros": 1764585295915556, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:55.915594) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1747246, prev total WAL file size 1747246, number of live WAL files 2.
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:55.916604) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1678KB)], [80(12MB)]
Dec  1 05:34:55 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585295916659, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15061719, "oldest_snapshot_seqno": -1}
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6776 keys, 14900116 bytes, temperature: kUnknown
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585296104447, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 14900116, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14855271, "index_size": 26813, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 178722, "raw_average_key_size": 26, "raw_value_size": 14733628, "raw_average_value_size": 2174, "num_data_blocks": 1057, "num_entries": 6776, "num_filter_entries": 6776, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764585295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:56.104765) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14900116 bytes
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:56.171001) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.2 rd, 79.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 12.7 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(17.4) write-amplify(8.7) OK, records in: 7304, records dropped: 528 output_compression: NoCompression
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:56.171049) EVENT_LOG_v1 {"time_micros": 1764585296171029, "job": 46, "event": "compaction_finished", "compaction_time_micros": 187883, "compaction_time_cpu_micros": 31644, "output_level": 6, "num_output_files": 1, "total_output_size": 14900116, "num_input_records": 7304, "num_output_records": 6776, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585296172026, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585296178665, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:55.916516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:56.178967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:56.178978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:56.178981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:56.178985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:34:56 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:34:56.178988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:34:57 np0005540825 nova_compute[256151]: 2025-12-01 10:34:57.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:34:57 np0005540825 nova_compute[256151]: 2025-12-01 10:34:57.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:34:57 np0005540825 nova_compute[256151]: 2025-12-01 10:34:57.028 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:34:57 np0005540825 nova_compute[256151]: 2025-12-01 10:34:57.063 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:34:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:57.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:57.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:57.408Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:34:58 np0005540825 nova_compute[256151]: 2025-12-01 10:34:58.387 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:34:58 np0005540825 nova_compute[256151]: 2025-12-01 10:34:58.507 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:34:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:34:58.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:34:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:34:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:34:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:34:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:34:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:34:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:34:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:34:59.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:34:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:34:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:34:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:34:59.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:34:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:35:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:01.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:35:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:01] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:35:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:01] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:35:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:01.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:02 np0005540825 nova_compute[256151]: 2025-12-01 10:35:02.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:02 np0005540825 nova_compute[256151]: 2025-12-01 10:35:02.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:02 np0005540825 nova_compute[256151]: 2025-12-01 10:35:02.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  1 05:35:02 np0005540825 podman[294169]: 2025-12-01 10:35:02.249666478 +0000 UTC m=+0.117715345 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:35:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:35:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:03.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:35:03 np0005540825 nova_compute[256151]: 2025-12-01 10:35:03.390 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:35:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:03.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:35:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:03 np0005540825 nova_compute[256151]: 2025-12-01 10:35:03.509 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:03.787Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:35:04.593 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:35:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:35:04.594 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:35:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:35:04.594 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:35:05 np0005540825 nova_compute[256151]: 2025-12-01 10:35:05.037 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:05.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:05.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:07 np0005540825 nova_compute[256151]: 2025-12-01 10:35:07.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:07.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:07.410Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:35:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:07.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:35:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:08 np0005540825 nova_compute[256151]: 2025-12-01 10:35:08.391 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:08 np0005540825 nova_compute[256151]: 2025-12-01 10:35:08.511 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:08.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:09 np0005540825 nova_compute[256151]: 2025-12-01 10:35:09.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:09.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:09.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:35:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:35:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:35:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:35:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:35:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:35:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:35:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:35:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:11.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:11] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:35:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:11] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:35:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:11.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:12 np0005540825 nova_compute[256151]: 2025-12-01 10:35:12.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:12 np0005540825 nova_compute[256151]: 2025-12-01 10:35:12.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.050 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.051 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.051 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.051 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.051 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:35:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:35:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:13.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.393 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:13.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:35:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1567705588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.513 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.520 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
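The "Running cmd (subprocess)" / "CMD ... returned: 0 in 0.468s" pair is oslo.concurrency's subprocess wrapper. The equivalent call through the public API, as a sketch:

    from oslo_concurrency import processutils

    # Logs "Running cmd (subprocess): ..." before exec and
    # "CMD ... returned: <rc> in <t>s" after; raises on a non-zero rc.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')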
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.691 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.694 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4500MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.695 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.695 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:35:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:13.788Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.802 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.802 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:35:13 np0005540825 nova_compute[256151]: 2025-12-01 10:35:13.839 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:35:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:35:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3185267231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:35:14 np0005540825 nova_compute[256151]: 2025-12-01 10:35:14.304 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:35:14 np0005540825 nova_compute[256151]: 2025-12-01 10:35:14.311 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:35:14 np0005540825 nova_compute[256151]: 2025-12-01 10:35:14.337 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
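The inventory dict above is what Placement uses to bound scheduling; capacity per resource class is (total - reserved) * allocation_ratio (my reading of the standard Placement check; worth verifying against the deployed release). Worked out for this host:

    # Capacity implied by the logged inventory, assuming the usual
    # Placement formula (total - reserved) * allocation_ratio.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, d in inv.items():
        cap = (d['total'] - d['reserved']) * d['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2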
Dec  1 05:35:14 np0005540825 nova_compute[256151]: 2025-12-01 10:35:14.339 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:35:14 np0005540825 nova_compute[256151]: 2025-12-01 10:35:14.339 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:35:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:35:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:15.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:35:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:15.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:16 np0005540825 nova_compute[256151]: 2025-12-01 10:35:16.341 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:35:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:17.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:35:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:17.411Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:35:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:17.411Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
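Both dashboard webhook receivers are failing (compute-1 and compute-2 on 8443, one with "context deadline exceeded", one with "i/o timeout"). A quick reachability probe against the exact URLs named in the errors; a sketch only, using the requests library:

    import requests

    # Probe the receiver URLs the dispatcher keeps timing out on.
    for host in ('compute-1', 'compute-2'):
        url = f'http://{host}.ctlplane.example.com:8443/api/prometheus_receiver'
        try:
            r = requests.post(url, json={}, timeout=5)
            print(url, r.status_code)
        except requests.RequestException as exc:
            print(url, 'unreachable:', exc)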
Dec  1 05:35:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:17.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:35:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 0 op/s
Dec  1 05:35:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:35:17 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:35:18 np0005540825 nova_compute[256151]: 2025-12-01 10:35:18.394 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:18 np0005540825 podman[294454]: 2025-12-01 10:35:18.372986155 +0000 UTC m=+0.024805102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:35:18 np0005540825 nova_compute[256151]: 2025-12-01 10:35:18.515 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:18.921Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:35:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:18.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:35:18 np0005540825 podman[294454]: 2025-12-01 10:35:18.943378943 +0000 UTC m=+0.595197850 container create 6fd17783eee4af84b0a5358238d9ad1fda8daaa304d8c45210949b3071983067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 05:35:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:19 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:35:19 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:35:19 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:35:19 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:35:19 np0005540825 nova_compute[256151]: 2025-12-01 10:35:19.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:19 np0005540825 systemd[1]: Started libpod-conmon-6fd17783eee4af84b0a5358238d9ad1fda8daaa304d8c45210949b3071983067.scope.
Dec  1 05:35:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:35:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:19.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:35:19 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:35:19 np0005540825 podman[294454]: 2025-12-01 10:35:19.267851883 +0000 UTC m=+0.919670790 container init 6fd17783eee4af84b0a5358238d9ad1fda8daaa304d8c45210949b3071983067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 05:35:19 np0005540825 podman[294454]: 2025-12-01 10:35:19.277422647 +0000 UTC m=+0.929241514 container start 6fd17783eee4af84b0a5358238d9ad1fda8daaa304d8c45210949b3071983067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 05:35:19 np0005540825 podman[294454]: 2025-12-01 10:35:19.281585748 +0000 UTC m=+0.933404635 container attach 6fd17783eee4af84b0a5358238d9ad1fda8daaa304d8c45210949b3071983067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_leavitt, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 05:35:19 np0005540825 systemd[1]: libpod-6fd17783eee4af84b0a5358238d9ad1fda8daaa304d8c45210949b3071983067.scope: Deactivated successfully.
Dec  1 05:35:19 np0005540825 romantic_leavitt[294470]: 167 167
Dec  1 05:35:19 np0005540825 conmon[294470]: conmon 6fd17783eee4af84b0a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6fd17783eee4af84b0a5358238d9ad1fda8daaa304d8c45210949b3071983067.scope/container/memory.events
Dec  1 05:35:19 np0005540825 podman[294454]: 2025-12-01 10:35:19.284743902 +0000 UTC m=+0.936562769 container died 6fd17783eee4af84b0a5358238d9ad1fda8daaa304d8c45210949b3071983067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:35:19 np0005540825 systemd[1]: var-lib-containers-storage-overlay-aadc581d95454c64f47019ecb56ea5fdab97e137175043e0430627a8cef326b4-merged.mount: Deactivated successfully.
Dec  1 05:35:19 np0005540825 podman[294454]: 2025-12-01 10:35:19.395725468 +0000 UTC m=+1.047544335 container remove 6fd17783eee4af84b0a5358238d9ad1fda8daaa304d8c45210949b3071983067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 05:35:19 np0005540825 systemd[1]: libpod-conmon-6fd17783eee4af84b0a5358238d9ad1fda8daaa304d8c45210949b3071983067.scope: Deactivated successfully.
Dec  1 05:35:19 np0005540825 podman[294476]: 2025-12-01 10:35:19.427183665 +0000 UTC m=+0.116045071 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
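The health_status=healthy event above comes from the container's configured healthcheck ('test': '/openstack/healthcheck' per the config_data). The same check can be triggered on demand; a sketch:

    import subprocess

    # Runs the container's own healthcheck command; exit code 0 means healthy.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                   check=True)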
Dec  1 05:35:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:19.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec  1 05:35:19 np0005540825 podman[294515]: 2025-12-01 10:35:19.597374727 +0000 UTC m=+0.059904516 container create 70a0f575aaf0099684e42c50f7040fe31e114a28a212dcd55e2e247da60c039a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:35:19 np0005540825 systemd[1]: Started libpod-conmon-70a0f575aaf0099684e42c50f7040fe31e114a28a212dcd55e2e247da60c039a.scope.
Dec  1 05:35:19 np0005540825 podman[294515]: 2025-12-01 10:35:19.567708017 +0000 UTC m=+0.030237856 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:35:19 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:35:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfa23cee4a895f962cd3028e3b293bca58262db15ea18f5cac6c2d6fffbc3de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfa23cee4a895f962cd3028e3b293bca58262db15ea18f5cac6c2d6fffbc3de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfa23cee4a895f962cd3028e3b293bca58262db15ea18f5cac6c2d6fffbc3de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfa23cee4a895f962cd3028e3b293bca58262db15ea18f5cac6c2d6fffbc3de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:19 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfa23cee4a895f962cd3028e3b293bca58262db15ea18f5cac6c2d6fffbc3de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:19 np0005540825 podman[294515]: 2025-12-01 10:35:19.714041974 +0000 UTC m=+0.176571743 container init 70a0f575aaf0099684e42c50f7040fe31e114a28a212dcd55e2e247da60c039a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  1 05:35:19 np0005540825 podman[294515]: 2025-12-01 10:35:19.725978271 +0000 UTC m=+0.188508020 container start 70a0f575aaf0099684e42c50f7040fe31e114a28a212dcd55e2e247da60c039a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gauss, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  1 05:35:19 np0005540825 podman[294515]: 2025-12-01 10:35:19.728779496 +0000 UTC m=+0.191309295 container attach 70a0f575aaf0099684e42c50f7040fe31e114a28a212dcd55e2e247da60c039a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  1 05:35:20 np0005540825 kind_gauss[294532]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:35:20 np0005540825 kind_gauss[294532]: --> All data devices are unavailable
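"passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" is ceph-volume's drive-group report: the one candidate is an LVM device, likely already consumed as an OSD (see the lvm list output below). One way to inspect what cephadm considers available; the pass-through invocation here is my assumption of the usual form, adjust locally:

    import json
    import subprocess

    # Ask ceph-volume (via cephadm) for its device inventory as JSON.
    out = subprocess.run(
        ['cephadm', 'ceph-volume', '--', 'inventory', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out):
        print(dev['path'], dev['available'], dev.get('rejected_reasons'))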
Dec  1 05:35:20 np0005540825 systemd[1]: libpod-70a0f575aaf0099684e42c50f7040fe31e114a28a212dcd55e2e247da60c039a.scope: Deactivated successfully.
Dec  1 05:35:20 np0005540825 podman[294515]: 2025-12-01 10:35:20.077940873 +0000 UTC m=+0.540470662 container died 70a0f575aaf0099684e42c50f7040fe31e114a28a212dcd55e2e247da60c039a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gauss, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 05:35:20 np0005540825 systemd[1]: var-lib-containers-storage-overlay-9bfa23cee4a895f962cd3028e3b293bca58262db15ea18f5cac6c2d6fffbc3de-merged.mount: Deactivated successfully.
Dec  1 05:35:20 np0005540825 podman[294515]: 2025-12-01 10:35:20.138735352 +0000 UTC m=+0.601265111 container remove 70a0f575aaf0099684e42c50f7040fe31e114a28a212dcd55e2e247da60c039a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:35:20 np0005540825 systemd[1]: libpod-conmon-70a0f575aaf0099684e42c50f7040fe31e114a28a212dcd55e2e247da60c039a.scope: Deactivated successfully.
Dec  1 05:35:20 np0005540825 podman[294657]: 2025-12-01 10:35:20.78459923 +0000 UTC m=+0.040691555 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:35:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:21.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:21 np0005540825 podman[294657]: 2025-12-01 10:35:21.238394643 +0000 UTC m=+0.494486978 container create 6872832aa713c96c15127612dbb1ddd9f1390be542b4fc211f26eee634b0b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  1 05:35:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:21] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:35:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:21] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
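The "GET /metrics" entries above are Prometheus scraping the mgr's prometheus module. A manual scrape of the same endpoint, as a sketch; the module's default port 9283 is an assumption, since the access log does not show the port:

    import urllib.request

    # Fetch the same exposition Prometheus pulls each scrape interval.
    with urllib.request.urlopen('http://192.168.122.100:9283/metrics') as resp:
        body = resp.read().decode()
    print(body.splitlines()[0])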
Dec  1 05:35:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:35:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:21.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:35:21 np0005540825 systemd[1]: Started libpod-conmon-6872832aa713c96c15127612dbb1ddd9f1390be542b4fc211f26eee634b0b637.scope.
Dec  1 05:35:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:21 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:35:21 np0005540825 podman[294657]: 2025-12-01 10:35:21.879699878 +0000 UTC m=+1.135792213 container init 6872832aa713c96c15127612dbb1ddd9f1390be542b4fc211f26eee634b0b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:35:21 np0005540825 podman[294657]: 2025-12-01 10:35:21.892669934 +0000 UTC m=+1.148762229 container start 6872832aa713c96c15127612dbb1ddd9f1390be542b4fc211f26eee634b0b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 05:35:21 np0005540825 happy_merkle[294673]: 167 167
Dec  1 05:35:21 np0005540825 systemd[1]: libpod-6872832aa713c96c15127612dbb1ddd9f1390be542b4fc211f26eee634b0b637.scope: Deactivated successfully.
Dec  1 05:35:22 np0005540825 podman[294657]: 2025-12-01 10:35:22.028371687 +0000 UTC m=+1.284464032 container attach 6872832aa713c96c15127612dbb1ddd9f1390be542b4fc211f26eee634b0b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_merkle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:35:22 np0005540825 podman[294657]: 2025-12-01 10:35:22.03035648 +0000 UTC m=+1.286448795 container died 6872832aa713c96c15127612dbb1ddd9f1390be542b4fc211f26eee634b0b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 05:35:22 np0005540825 systemd[1]: var-lib-containers-storage-overlay-d73325e1d84289a9bd39ebbc30f050cc5e34da5b8ec8630bbe9d573078cc8f2e-merged.mount: Deactivated successfully.
Dec  1 05:35:22 np0005540825 podman[294657]: 2025-12-01 10:35:22.089832344 +0000 UTC m=+1.345924639 container remove 6872832aa713c96c15127612dbb1ddd9f1390be542b4fc211f26eee634b0b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_merkle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:35:22 np0005540825 systemd[1]: libpod-conmon-6872832aa713c96c15127612dbb1ddd9f1390be542b4fc211f26eee634b0b637.scope: Deactivated successfully.
Dec  1 05:35:22 np0005540825 podman[294702]: 2025-12-01 10:35:22.28608654 +0000 UTC m=+0.049469239 container create b67e9c4e273f46095bc22ced2d0a8e195cfc13c461966971510eaa2eabf0da25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_cannon, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:35:22 np0005540825 systemd[1]: Started libpod-conmon-b67e9c4e273f46095bc22ced2d0a8e195cfc13c461966971510eaa2eabf0da25.scope.
Dec  1 05:35:22 np0005540825 podman[294702]: 2025-12-01 10:35:22.260706224 +0000 UTC m=+0.024088993 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:35:22 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:35:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7109d9af1828bfe5531f794daceb26c60fabba074768e435dddcb6bd4ede66f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7109d9af1828bfe5531f794daceb26c60fabba074768e435dddcb6bd4ede66f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7109d9af1828bfe5531f794daceb26c60fabba074768e435dddcb6bd4ede66f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:22 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7109d9af1828bfe5531f794daceb26c60fabba074768e435dddcb6bd4ede66f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:22 np0005540825 podman[294702]: 2025-12-01 10:35:22.383080002 +0000 UTC m=+0.146462711 container init b67e9c4e273f46095bc22ced2d0a8e195cfc13c461966971510eaa2eabf0da25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  1 05:35:22 np0005540825 podman[294702]: 2025-12-01 10:35:22.389283247 +0000 UTC m=+0.152665946 container start b67e9c4e273f46095bc22ced2d0a8e195cfc13c461966971510eaa2eabf0da25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_cannon, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:35:22 np0005540825 podman[294702]: 2025-12-01 10:35:22.392629287 +0000 UTC m=+0.156011986 container attach b67e9c4e273f46095bc22ced2d0a8e195cfc13c461966971510eaa2eabf0da25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_cannon, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:35:22 np0005540825 podman[294716]: 2025-12-01 10:35:22.402797407 +0000 UTC m=+0.068184716 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]: {
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:    "1": [
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:        {
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            "devices": [
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "/dev/loop3"
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            ],
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            "lv_name": "ceph_lv0",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            "lv_size": "21470642176",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            "name": "ceph_lv0",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            "tags": {
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.cluster_name": "ceph",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.crush_device_class": "",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.encrypted": "0",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.osd_id": "1",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.type": "block",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.vdo": "0",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:                "ceph.with_tpm": "0"
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            },
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            "type": "block",
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:            "vg_name": "ceph_vg0"
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:        }
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]:    ]
Dec  1 05:35:22 np0005540825 elastic_cannon[294724]: }
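The JSON block printed by the elastic_cannon container has the structure of "ceph-volume lvm list --format json" output, keyed by OSD id. Reducing it to the useful mapping, with a trimmed copy of the payload inlined so the sketch is self-contained:

    import json

    # Trimmed from the payload above; most lv_tags omitted.
    raw = '''{"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "vg_name": "ceph_vg0",
                     "type": "block",
                     "tags": {"ceph.osd_id": "1",
                              "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047"}}]}'''
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id} -> {lv['lv_path']} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']})")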
Dec  1 05:35:22 np0005540825 systemd[1]: libpod-b67e9c4e273f46095bc22ced2d0a8e195cfc13c461966971510eaa2eabf0da25.scope: Deactivated successfully.
Dec  1 05:35:22 np0005540825 podman[294702]: 2025-12-01 10:35:22.706379141 +0000 UTC m=+0.469761860 container died b67e9c4e273f46095bc22ced2d0a8e195cfc13c461966971510eaa2eabf0da25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 05:35:22 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7109d9af1828bfe5531f794daceb26c60fabba074768e435dddcb6bd4ede66f5-merged.mount: Deactivated successfully.
Dec  1 05:35:22 np0005540825 podman[294702]: 2025-12-01 10:35:22.817577612 +0000 UTC m=+0.580960311 container remove b67e9c4e273f46095bc22ced2d0a8e195cfc13c461966971510eaa2eabf0da25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_cannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:35:22 np0005540825 systemd[1]: libpod-conmon-b67e9c4e273f46095bc22ced2d0a8e195cfc13c461966971510eaa2eabf0da25.scope: Deactivated successfully.
Dec  1 05:35:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:23.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
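Note: these beast lines are load-balancer health probes (anonymous HEAD / from 192.168.122.100 and .102, one pair roughly every two seconds), and the identical req=0x7fdfdc9835d0 across requests is apparently a reused request-state address rather than one stuck request. A hedged sketch of a parser (hypothetical, written against the exact field layout visible above) for tabulating client, status, and latency from such lines:

    import re

    # Field layout as seen above:
    # beast: 0x...: <client> - <user> [<ts>] "<verb> <path> HTTP/x.y" <status> <bytes> - - - latency=<sec>s
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<verb>\S+) (?P<path>\S+) [^"]+" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    sample = ('beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous '
              '[01/Dec/2025:10:35:23.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.000000000s')
    m = BEAST.search(sample)
    if m:
        print(m['client'], m['verb'], m['status'], float(m['latency']))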
Dec  1 05:35:23 np0005540825 nova_compute[256151]: 2025-12-01 10:35:23.397 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:35:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:23.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:35:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:23 np0005540825 nova_compute[256151]: 2025-12-01 10:35:23.516 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:23 np0005540825 podman[294849]: 2025-12-01 10:35:23.526295823 +0000 UTC m=+0.065836934 container create c8b10131af5fd4f14ba892d69f84ecad2ba582816d5c5a64c4b7a44871a313ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wescoff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  1 05:35:23 np0005540825 systemd[1]: Started libpod-conmon-c8b10131af5fd4f14ba892d69f84ecad2ba582816d5c5a64c4b7a44871a313ae.scope.
Dec  1 05:35:23 np0005540825 podman[294849]: 2025-12-01 10:35:23.497589229 +0000 UTC m=+0.037130320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:35:23 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:35:23 np0005540825 podman[294849]: 2025-12-01 10:35:23.618015876 +0000 UTC m=+0.157556987 container init c8b10131af5fd4f14ba892d69f84ecad2ba582816d5c5a64c4b7a44871a313ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wescoff, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:35:23 np0005540825 podman[294849]: 2025-12-01 10:35:23.625661319 +0000 UTC m=+0.165202390 container start c8b10131af5fd4f14ba892d69f84ecad2ba582816d5c5a64c4b7a44871a313ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  1 05:35:23 np0005540825 podman[294849]: 2025-12-01 10:35:23.629997725 +0000 UTC m=+0.169538896 container attach c8b10131af5fd4f14ba892d69f84ecad2ba582816d5c5a64c4b7a44871a313ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wescoff, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:35:23 np0005540825 admiring_wescoff[294865]: 167 167
Dec  1 05:35:23 np0005540825 systemd[1]: libpod-c8b10131af5fd4f14ba892d69f84ecad2ba582816d5c5a64c4b7a44871a313ae.scope: Deactivated successfully.
Dec  1 05:35:23 np0005540825 podman[294849]: 2025-12-01 10:35:23.633334463 +0000 UTC m=+0.172875554 container died c8b10131af5fd4f14ba892d69f84ecad2ba582816d5c5a64c4b7a44871a313ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wescoff, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  1 05:35:23 np0005540825 systemd[1]: var-lib-containers-storage-overlay-c7e35039cabe33affdd31b8a958988338e85e9675d863be774196fa55a3af5ca-merged.mount: Deactivated successfully.
Dec  1 05:35:23 np0005540825 podman[294849]: 2025-12-01 10:35:23.697895603 +0000 UTC m=+0.237436674 container remove c8b10131af5fd4f14ba892d69f84ecad2ba582816d5c5a64c4b7a44871a313ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:35:23 np0005540825 systemd[1]: libpod-conmon-c8b10131af5fd4f14ba892d69f84ecad2ba582816d5c5a64c4b7a44871a313ae.scope: Deactivated successfully.
Dec  1 05:35:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:23.789Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:35:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:23.789Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
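Note: both ceph-dashboard webhook receivers fail the same way throughout this window: TCP dials to compute-1/compute-2 on 8443 time out, so Alertmanager exhausts its retry budget and drops the notification. A quick hedged connectivity probe against the same endpoints (hostnames and port copied verbatim from the errors; running it from the alertmanager host would distinguish a dead receiver from a filtered port):

    import socket

    # Endpoints taken from the alertmanager errors above.
    targets = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]
    for host, port in targets:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} tcp connect ok")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")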
Dec  1 05:35:23 np0005540825 podman[294893]: 2025-12-01 10:35:23.847798454 +0000 UTC m=+0.039791350 container create e25362d04f9f593addc0582029231201b83bd9b45d06074f76abceac1a38f825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_sanderson, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  1 05:35:23 np0005540825 systemd[1]: Started libpod-conmon-e25362d04f9f593addc0582029231201b83bd9b45d06074f76abceac1a38f825.scope.
Dec  1 05:35:23 np0005540825 podman[294893]: 2025-12-01 10:35:23.830139734 +0000 UTC m=+0.022132630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:35:23 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:35:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eff77ccbf391b13eda977e2ed8d1571f12645530a4c5c9fb73771f41ddb22fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eff77ccbf391b13eda977e2ed8d1571f12645530a4c5c9fb73771f41ddb22fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eff77ccbf391b13eda977e2ed8d1571f12645530a4c5c9fb73771f41ddb22fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:23 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eff77ccbf391b13eda977e2ed8d1571f12645530a4c5c9fb73771f41ddb22fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:35:23 np0005540825 podman[294893]: 2025-12-01 10:35:23.996458582 +0000 UTC m=+0.188451478 container init e25362d04f9f593addc0582029231201b83bd9b45d06074f76abceac1a38f825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_sanderson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 05:35:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:24 np0005540825 podman[294893]: 2025-12-01 10:35:24.007752523 +0000 UTC m=+0.199745429 container start e25362d04f9f593addc0582029231201b83bd9b45d06074f76abceac1a38f825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_sanderson, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  1 05:35:24 np0005540825 podman[294893]: 2025-12-01 10:35:24.012063208 +0000 UTC m=+0.204056144 container attach e25362d04f9f593addc0582029231201b83bd9b45d06074f76abceac1a38f825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:35:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:35:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:35:24 np0005540825 lvm[294983]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:35:24 np0005540825 lvm[294983]: VG ceph_vg0 finished
Dec  1 05:35:24 np0005540825 modest_sanderson[294909]: {}
Dec  1 05:35:24 np0005540825 systemd[1]: libpod-e25362d04f9f593addc0582029231201b83bd9b45d06074f76abceac1a38f825.scope: Deactivated successfully.
Dec  1 05:35:24 np0005540825 podman[294893]: 2025-12-01 10:35:24.730120948 +0000 UTC m=+0.922113814 container died e25362d04f9f593addc0582029231201b83bd9b45d06074f76abceac1a38f825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_sanderson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  1 05:35:24 np0005540825 systemd[1]: libpod-e25362d04f9f593addc0582029231201b83bd9b45d06074f76abceac1a38f825.scope: Consumed 1.186s CPU time.
Dec  1 05:35:24 np0005540825 systemd[1]: var-lib-containers-storage-overlay-8eff77ccbf391b13eda977e2ed8d1571f12645530a4c5c9fb73771f41ddb22fa-merged.mount: Deactivated successfully.
Dec  1 05:35:24 np0005540825 podman[294893]: 2025-12-01 10:35:24.937221933 +0000 UTC m=+1.129214839 container remove e25362d04f9f593addc0582029231201b83bd9b45d06074f76abceac1a38f825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  1 05:35:24 np0005540825 systemd[1]: libpod-conmon-e25362d04f9f593addc0582029231201b83bd9b45d06074f76abceac1a38f825.scope: Deactivated successfully.
Dec  1 05:35:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:35:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:35:25 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:35:25 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:35:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:25.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:35:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:25.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:35:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec  1 05:35:26 np0005540825 nova_compute[256151]: 2025-12-01 10:35:26.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:26 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:35:26 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:35:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:27 np0005540825 nova_compute[256151]: 2025-12-01 10:35:27.045 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:27 np0005540825 nova_compute[256151]: 2025-12-01 10:35:27.045 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  1 05:35:27 np0005540825 nova_compute[256151]: 2025-12-01 10:35:27.061 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  1 05:35:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:27.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:27.412Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:35:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:27.413Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:35:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:27.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Dec  1 05:35:28 np0005540825 nova_compute[256151]: 2025-12-01 10:35:28.398 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:28 np0005540825 nova_compute[256151]: 2025-12-01 10:35:28.517 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:28.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:35:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:29.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:35:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:29.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:31.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:31] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:35:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:31] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:35:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:31.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:31 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:33.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:33 np0005540825 podman[295058]: 2025-12-01 10:35:33.217283964 +0000 UTC m=+0.086130074 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 05:35:33 np0005540825 nova_compute[256151]: 2025-12-01 10:35:33.401 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:33.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:33 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:33 np0005540825 nova_compute[256151]: 2025-12-01 10:35:33.520 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:33.790Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:35:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:35.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:35:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:35.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:35 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:37.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:37.414Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:35:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:37.414Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:35:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:37.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:37 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:38 np0005540825 nova_compute[256151]: 2025-12-01 10:35:38.404 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:38 np0005540825 nova_compute[256151]: 2025-12-01 10:35:38.521 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:38.922Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:39.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:35:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:39.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:35:39
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.rgw.root', 'vms', '.nfs', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:35:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:35:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:35:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
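Note: the pg_autoscaler figures above are internally consistent with pg_target = used_fraction x bias x 300 before power-of-two quantization and pool minimums; 300 plausibly decomposes as mon_target_pg_per_osd (default 100) times the 3 OSDs in this cluster, though that split is an inference rather than something the log states. A quick check against two of the logged values:

    # Values copied from the pg_autoscaler lines above.
    # cephfs.cephfs.meta, bias 4.0:
    print(5.087256625643029e-07 * 4.0 * 300)   # ~0.0006104707950771635, the logged "pg target"
    # .mgr, bias 1.0:
    print(7.185749983720779e-06 * 1.0 * 300)   # ~0.0021557249951162337, as logged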
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:35:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:35:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:41.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:41] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:35:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:41] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:35:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:41.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:41 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:43.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:43 np0005540825 nova_compute[256151]: 2025-12-01 10:35:43.404 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:43.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:43 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:43 np0005540825 nova_compute[256151]: 2025-12-01 10:35:43.521 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:43.791Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:45.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:45.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:45 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:47.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:47.414Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:35:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:47.414Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:35:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:47.414Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:35:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:47.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:35:47 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:48 np0005540825 nova_compute[256151]: 2025-12-01 10:35:48.406 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:48 np0005540825 nova_compute[256151]: 2025-12-01 10:35:48.523 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:48.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:49.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:49.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:49 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:50 np0005540825 podman[295103]: 2025-12-01 10:35:50.191561554 +0000 UTC m=+0.055716477 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 05:35:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:35:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:51.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:35:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:51] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:35:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:35:51] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:35:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:51.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:51 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:53 np0005540825 podman[295152]: 2025-12-01 10:35:53.227110721 +0000 UTC m=+0.089818837 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec  1 05:35:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:53.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:53 np0005540825 nova_compute[256151]: 2025-12-01 10:35:53.408 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:53.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:53 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:53 np0005540825 nova_compute[256151]: 2025-12-01 10:35:53.524 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:53.792Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:35:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:53.792Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:35:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:35:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:55.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:55.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:55 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:35:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:35:57 np0005540825 nova_compute[256151]: 2025-12-01 10:35:57.042 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:35:57 np0005540825 nova_compute[256151]: 2025-12-01 10:35:57.043 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:35:57 np0005540825 nova_compute[256151]: 2025-12-01 10:35:57.043 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:35:57 np0005540825 nova_compute[256151]: 2025-12-01 10:35:57.085 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:35:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:57.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:57.415Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:35:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:57.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:35:57 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:35:58 np0005540825 nova_compute[256151]: 2025-12-01 10:35:58.410 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:58 np0005540825 nova_compute[256151]: 2025-12-01 10:35:58.525 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:35:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:35:58.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:35:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:35:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:35:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:35:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:35:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:35:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:35:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:35:59.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:35:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:35:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:35:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:35:59.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:35:59 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:01.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:01] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:36:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:01] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:36:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:01.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:01 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:36:02 np0005540825 nova_compute[256151]: 2025-12-01 10:36:02.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:36:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:03.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:03 np0005540825 nova_compute[256151]: 2025-12-01 10:36:03.413 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:36:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:03.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:36:03 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:03 np0005540825 nova_compute[256151]: 2025-12-01 10:36:03.527 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:03.793Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:36:04 np0005540825 podman[295184]: 2025-12-01 10:36:04.291699329 +0000 UTC m=+0.144236798 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 05:36:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:36:04.595 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:36:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:36:04.596 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:36:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:36:04.596 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:36:05 np0005540825 nova_compute[256151]: 2025-12-01 10:36:05.022 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:36:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:36:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:05.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:36:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:05.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:05 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:07.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:07.416Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:36:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:07.416Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:36:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:36:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:07.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:36:07 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:36:08 np0005540825 nova_compute[256151]: 2025-12-01 10:36:08.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:36:08 np0005540825 nova_compute[256151]: 2025-12-01 10:36:08.416 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:08 np0005540825 nova_compute[256151]: 2025-12-01 10:36:08.529 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:08.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:36:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:09.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:36:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:09.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:36:09 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:36:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:36:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:36:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:36:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:36:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:36:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:36:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:36:10 np0005540825 nova_compute[256151]: 2025-12-01 10:36:10.022 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:36:10 np0005540825 nova_compute[256151]: 2025-12-01 10:36:10.061 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:36:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:11.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:11] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:36:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:11] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:36:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:11.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:11 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.081 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.081 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.082 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.082 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.082 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:36:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:13.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.419 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:13.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:13 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.531 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:36:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478462913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.564 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.736 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.738 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4481MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.738 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.739 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:36:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:13.795Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.879 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.880 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.938 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing inventories for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  1 05:36:13 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.999 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating ProviderTree inventory for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  1 05:36:14 np0005540825 nova_compute[256151]: 2025-12-01 10:36:13.999 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Updating inventory in ProviderTree for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 05:36:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:36:14 np0005540825 nova_compute[256151]: 2025-12-01 10:36:14.015 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing aggregate associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  1 05:36:14 np0005540825 nova_compute[256151]: 2025-12-01 10:36:14.039 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Refreshing trait associations for resource provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI,HW_CPU_X86_SVM,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  1 05:36:14 np0005540825 nova_compute[256151]: 2025-12-01 10:36:14.053 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:36:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:36:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1357080478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:36:14 np0005540825 nova_compute[256151]: 2025-12-01 10:36:14.503 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:36:14 np0005540825 nova_compute[256151]: 2025-12-01 10:36:14.510 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:36:15 np0005540825 nova_compute[256151]: 2025-12-01 10:36:15.181 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:36:15 np0005540825 nova_compute[256151]: 2025-12-01 10:36:15.182 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:36:15 np0005540825 nova_compute[256151]: 2025-12-01 10:36:15.182 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:36:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:15.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:15.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:15 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:16 np0005540825 nova_compute[256151]: 2025-12-01 10:36:16.184 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:36:16 np0005540825 nova_compute[256151]: 2025-12-01 10:36:16.184 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:36:16 np0005540825 nova_compute[256151]: 2025-12-01 10:36:16.184 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:36:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  1 05:36:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  1 05:36:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:17.417Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:17.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:17 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:36:18 np0005540825 nova_compute[256151]: 2025-12-01 10:36:18.421 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:18 np0005540825 nova_compute[256151]: 2025-12-01 10:36:18.534 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:18.927Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
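The ganesha grace cycle above ends with rados_cluster_grace_enforcing reporting ret=-45 each pass. Assuming the usual convention that this is a negated errno (how ganesha's RADOS recovery backend reports failures), the value can be decoded with the standard library; on this Linux host errno 45 is EL2NSYNC:

    import errno, os

    ret = -45  # value from the ganesha log line above
    code = -ret
    print(errno.errorcode.get(code), '-', os.strerror(code))
    # On Linux/glibc this prints: EL2NSYNC - Level 2 not synchronized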
Dec  1 05:36:19 np0005540825 nova_compute[256151]: 2025-12-01 10:36:19.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:36:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.002000054s ======
Dec  1 05:36:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:19.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec  1 05:36:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:19.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:19 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
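The ceph-mgr pgmap lines repeat the same cluster summary every two seconds (353 PGs, all active+clean, 60 GiB total). The same numbers can be pulled on demand from the mon; a sketch assuming the ceph CLI and an admin keyring are available on this host:

    import json, subprocess

    # 'ceph pg stat --format json' is a standard mon command; the exact
    # JSON layout varies by release, so this just pretty-prints it.
    out = subprocess.run(['ceph', 'pg', 'stat', '--format', 'json'],
                         check=True, capture_output=True, text=True).stdout
    print(json.dumps(json.loads(out), indent=2))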
Dec  1 05:36:21 np0005540825 podman[295297]: 2025-12-01 10:36:21.178318029 +0000 UTC m=+0.046036159 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:36:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:21.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:21] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:36:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:21] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Dec  1 05:36:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:21.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:21 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:36:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:23.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:23 np0005540825 nova_compute[256151]: 2025-12-01 10:36:23.423 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:23.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:23 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:23 np0005540825 nova_compute[256151]: 2025-12-01 10:36:23.535 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:23.796Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:36:24 np0005540825 podman[295320]: 2025-12-01 10:36:24.20761387 +0000 UTC m=+0.073712538 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 05:36:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:36:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:36:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:25.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:25.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:25 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:26 np0005540825 podman[295466]: 2025-12-01 10:36:26.98724928 +0000 UTC m=+0.887615549 container exec 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:36:27 np0005540825 podman[295466]: 2025-12-01 10:36:27.11997935 +0000 UTC m=+1.020345659 container exec_died 04e54403a63b389bbbec1024baf07fe083822adf8debfd9c927a52a06f70b8a1 (image=quay.io/ceph/ceph:v19, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mon-compute-0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Dec  1 05:36:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:27.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:27.418Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:36:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:27.418Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:36:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:27.419Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
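This is the first failure in the window with a concrete cause: the dials to 192.168.122.101:8443 and 192.168.122.102:8443 time out, so the earlier "context deadline exceeded" errors were very likely the same unreachable dashboard receivers. A minimal reachability probe with the standard library, reusing the hosts and port from the log lines (the 5-second timeout is an arbitrary choice):

    import socket

    for host in ('compute-1.ctlplane.example.com',
                 'compute-2.ctlplane.example.com'):
        try:
            # Mirrors the TCP dial alertmanager attempts before its POST.
            with socket.create_connection((host, 8443), timeout=5):
                print(host, 'TCP connect ok')
        except OSError as exc:
            print(host, 'dial failed:', exc)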
Dec  1 05:36:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:36:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:27.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:36:27 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:36:27 np0005540825 podman[295586]: 2025-12-01 10:36:27.606984642 +0000 UTC m=+0.101493349 container exec 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:36:27 np0005540825 podman[295586]: 2025-12-01 10:36:27.619729792 +0000 UTC m=+0.114238439 container exec_died 6f6cf01cf4add71c311676e9908aca30b90b94b7eb4eed46b57a6078721d520f (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:36:27 np0005540825 podman[295677]: 2025-12-01 10:36:27.961221842 +0000 UTC m=+0.067442740 container exec 7a97e5c792e90c0e9beef244d64f90b782f45501ef79e0290396630e04fbacec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:36:27 np0005540825 podman[295677]: 2025-12-01 10:36:27.975795701 +0000 UTC m=+0.082016579 container exec_died 7a97e5c792e90c0e9beef244d64f90b782f45501ef79e0290396630e04fbacec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:36:28 np0005540825 podman[295743]: 2025-12-01 10:36:28.213890072 +0000 UTC m=+0.062228001 container exec 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 05:36:28 np0005540825 podman[295743]: 2025-12-01 10:36:28.220522799 +0000 UTC m=+0.068860738 container exec_died 0ce6b28b78cdc773acbae8987038033199adf9f2d08be5b101f663b41bdbf569 (image=quay.io/ceph/haproxy:2.3, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-haproxy-nfs-cephfs-compute-0-alcixd)
Dec  1 05:36:28 np0005540825 nova_compute[256151]: 2025-12-01 10:36:28.426 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:28 np0005540825 podman[295808]: 2025-12-01 10:36:28.459013211 +0000 UTC m=+0.062283923 container exec a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, name=keepalived, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  1 05:36:28 np0005540825 podman[295808]: 2025-12-01 10:36:28.472800609 +0000 UTC m=+0.076071321 container exec_died a5bc912f6140365e8fac95a046d1f1cd854ca55aaf2d1e10454f7fa95d0346ac (image=quay.io/ceph/keepalived:2.2.4, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-keepalived-nfs-cephfs-compute-0-gzwexr, io.buildah.version=1.28.2, io.openshift.expose-services=, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, distribution-scope=public, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, version=2.2.4, build-date=2023-02-22T09:23:20, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9)
Dec  1 05:36:28 np0005540825 nova_compute[256151]: 2025-12-01 10:36:28.536 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:28 np0005540825 podman[295873]: 2025-12-01 10:36:28.70789341 +0000 UTC m=+0.066867205 container exec fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:36:28 np0005540825 podman[295873]: 2025-12-01 10:36:28.747748903 +0000 UTC m=+0.106722618 container exec_died fa43ac72a8a6a2863fa517cbc53fe118714aa74f1d9b620c1e40de173c893c3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:36:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:28.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:28 np0005540825 podman[295948]: 2025-12-01 10:36:28.975798977 +0000 UTC m=+0.066090614 container exec 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 05:36:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:36:29 np0005540825 podman[295948]: 2025-12-01 10:36:29.160498784 +0000 UTC m=+0.250790351 container exec_died 2e1200771a4f85a610f0f173c3c2000346e63d85e37d815d1d1db9886b52c917 (image=quay.io/ceph/grafana:10.4.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  1 05:36:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:29.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:29.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:29 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:29 np0005540825 podman[296060]: 2025-12-01 10:36:29.674461185 +0000 UTC m=+0.151082471 container exec f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:36:29 np0005540825 podman[296060]: 2025-12-01 10:36:29.719475135 +0000 UTC m=+0.196096391 container exec_died f4d1dfb280c04c299aa8be4743fa19bf2fe3a6e302067b3bdeba477b91d1a552 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 05:36:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:36:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:29 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:36:29 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:36:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 557 B/s rd, 0 op/s
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:30 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
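The burst above is the cephadm mgr module (mgr.compute-0.fospow) persisting its state via config-key set and fetching keys and a minimal conf, with each call audited by the mon. The same mon command interface is scriptable through the python3-rados binding; a sketch assuming that binding plus a readable /etc/ceph/ceph.conf and admin keyring, with the command JSON copied from the audit line:

    import json
    import rados  # python3-rados binding, assumed installed

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({'prefix': 'osd blocklist ls', 'format': 'json'})
    # mon_command returns (retcode, output bytes, status string).
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outbuf.decode() or outs)
    cluster.shutdown()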
Dec  1 05:36:31 np0005540825 podman[296302]: 2025-12-01 10:36:31.130765803 +0000 UTC m=+0.051670839 container create 03b2448a573aa1b810d766697adf005e1d8ff1ef9717d6a7966b24fa327e29bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:36:31 np0005540825 systemd[1]: Started libpod-conmon-03b2448a573aa1b810d766697adf005e1d8ff1ef9717d6a7966b24fa327e29bf.scope.
Dec  1 05:36:31 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:36:31 np0005540825 podman[296302]: 2025-12-01 10:36:31.110162243 +0000 UTC m=+0.031067319 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:36:31 np0005540825 podman[296302]: 2025-12-01 10:36:31.209351909 +0000 UTC m=+0.130256935 container init 03b2448a573aa1b810d766697adf005e1d8ff1ef9717d6a7966b24fa327e29bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:36:31 np0005540825 podman[296302]: 2025-12-01 10:36:31.216731576 +0000 UTC m=+0.137636612 container start 03b2448a573aa1b810d766697adf005e1d8ff1ef9717d6a7966b24fa327e29bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carson, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  1 05:36:31 np0005540825 podman[296302]: 2025-12-01 10:36:31.220018544 +0000 UTC m=+0.140923570 container attach 03b2448a573aa1b810d766697adf005e1d8ff1ef9717d6a7966b24fa327e29bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:36:31 np0005540825 frosty_carson[296318]: 167 167
Dec  1 05:36:31 np0005540825 systemd[1]: libpod-03b2448a573aa1b810d766697adf005e1d8ff1ef9717d6a7966b24fa327e29bf.scope: Deactivated successfully.
Dec  1 05:36:31 np0005540825 podman[296302]: 2025-12-01 10:36:31.221941685 +0000 UTC m=+0.142846681 container died 03b2448a573aa1b810d766697adf005e1d8ff1ef9717d6a7966b24fa327e29bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carson, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  1 05:36:31 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e13970703330f7f1fd59fdc0de2f0f664c88b7f7350c130ec67a26d0a354bd0d-merged.mount: Deactivated successfully.
Dec  1 05:36:31 np0005540825 podman[296302]: 2025-12-01 10:36:31.273010848 +0000 UTC m=+0.193915844 container remove 03b2448a573aa1b810d766697adf005e1d8ff1ef9717d6a7966b24fa327e29bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_carson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:36:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:31.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:31 np0005540825 systemd[1]: libpod-conmon-03b2448a573aa1b810d766697adf005e1d8ff1ef9717d6a7966b24fa327e29bf.scope: Deactivated successfully.
Dec  1 05:36:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:31] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:36:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:31] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Dec  1 05:36:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:31 np0005540825 podman[296343]: 2025-12-01 10:36:31.506759483 +0000 UTC m=+0.051132075 container create 15574f6b2316d8b5050ea1fdcefa3646ac6fa2ca61b2d9fdaf2e4e3ec292cfa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:36:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:31.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:31 np0005540825 systemd[1]: Started libpod-conmon-15574f6b2316d8b5050ea1fdcefa3646ac6fa2ca61b2d9fdaf2e4e3ec292cfa8.scope.
Dec  1 05:36:31 np0005540825 podman[296343]: 2025-12-01 10:36:31.485340442 +0000 UTC m=+0.029713074 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:36:31 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:36:31 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f736f2da193435a1caf04c03db8961395a3951b30b41f4f4b59838dc4bc45bbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:31 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f736f2da193435a1caf04c03db8961395a3951b30b41f4f4b59838dc4bc45bbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:31 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f736f2da193435a1caf04c03db8961395a3951b30b41f4f4b59838dc4bc45bbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:31 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f736f2da193435a1caf04c03db8961395a3951b30b41f4f4b59838dc4bc45bbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:31 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f736f2da193435a1caf04c03db8961395a3951b30b41f4f4b59838dc4bc45bbc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:31 np0005540825 podman[296343]: 2025-12-01 10:36:31.610215613 +0000 UTC m=+0.154588225 container init 15574f6b2316d8b5050ea1fdcefa3646ac6fa2ca61b2d9fdaf2e4e3ec292cfa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  1 05:36:31 np0005540825 podman[296343]: 2025-12-01 10:36:31.616277125 +0000 UTC m=+0.160649707 container start 15574f6b2316d8b5050ea1fdcefa3646ac6fa2ca61b2d9fdaf2e4e3ec292cfa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:36:31 np0005540825 podman[296343]: 2025-12-01 10:36:31.619658015 +0000 UTC m=+0.164030607 container attach 15574f6b2316d8b5050ea1fdcefa3646ac6fa2ca61b2d9fdaf2e4e3ec292cfa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  1 05:36:31 np0005540825 angry_keller[296360]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:36:31 np0005540825 angry_keller[296360]: --> All data devices are unavailable
Dec  1 05:36:31 np0005540825 systemd[1]: libpod-15574f6b2316d8b5050ea1fdcefa3646ac6fa2ca61b2d9fdaf2e4e3ec292cfa8.scope: Deactivated successfully.
Dec  1 05:36:31 np0005540825 podman[296343]: 2025-12-01 10:36:31.964672409 +0000 UTC m=+0.509045031 container died 15574f6b2316d8b5050ea1fdcefa3646ac6fa2ca61b2d9fdaf2e4e3ec292cfa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec  1 05:36:31 np0005540825 systemd[1]: var-lib-containers-storage-overlay-f736f2da193435a1caf04c03db8961395a3951b30b41f4f4b59838dc4bc45bbc-merged.mount: Deactivated successfully.
Dec  1 05:36:32 np0005540825 podman[296343]: 2025-12-01 10:36:32.015812723 +0000 UTC m=+0.560185305 container remove 15574f6b2316d8b5050ea1fdcefa3646ac6fa2ca61b2d9fdaf2e4e3ec292cfa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_keller, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:36:32 np0005540825 systemd[1]: libpod-conmon-15574f6b2316d8b5050ea1fdcefa3646ac6fa2ca61b2d9fdaf2e4e3ec292cfa8.scope: Deactivated successfully.
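The short-lived containers above (frosty_carson, angry_keller, and the rest) are cephadm's periodic host/device scan; angry_keller's ceph-volume pass saw "0 physical, 1 LVM" and rejected every device, so no new OSDs will be created from this host. The same inventory can be requested directly; a sketch assuming ceph-volume is reachable from the host (on a cephadm host it normally runs inside the ceph container, e.g. via 'cephadm ceph-volume -- inventory'):

    import json, subprocess

    out = subprocess.run(
        ['ceph-volume', 'inventory', '--format', 'json'],
        check=True, capture_output=True, text=True).stdout
    for dev in json.loads(out):
        # Each entry carries the device path, availability flag, and the
        # reasons ceph-volume rejected it, matching the verdict logged above.
        print(dev.get('path'), dev.get('available'), dev.get('rejected_reasons'))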
Dec  1 05:36:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 557 B/s rd, 0 op/s
Dec  1 05:36:32 np0005540825 podman[296480]: 2025-12-01 10:36:32.65972852 +0000 UTC m=+0.046811799 container create cac29cfea584726e50b3c98f10db6de085cfccfa3045874e0303af1a7e7738e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:36:32 np0005540825 systemd[1]: Started libpod-conmon-cac29cfea584726e50b3c98f10db6de085cfccfa3045874e0303af1a7e7738e3.scope.
Dec  1 05:36:32 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:36:32 np0005540825 podman[296480]: 2025-12-01 10:36:32.639781248 +0000 UTC m=+0.026864447 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:36:32 np0005540825 podman[296480]: 2025-12-01 10:36:32.737787813 +0000 UTC m=+0.124871002 container init cac29cfea584726e50b3c98f10db6de085cfccfa3045874e0303af1a7e7738e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  1 05:36:32 np0005540825 podman[296480]: 2025-12-01 10:36:32.745819557 +0000 UTC m=+0.132902736 container start cac29cfea584726e50b3c98f10db6de085cfccfa3045874e0303af1a7e7738e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:36:32 np0005540825 podman[296480]: 2025-12-01 10:36:32.749189007 +0000 UTC m=+0.136272206 container attach cac29cfea584726e50b3c98f10db6de085cfccfa3045874e0303af1a7e7738e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_mcnulty, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  1 05:36:32 np0005540825 cranky_mcnulty[296496]: 167 167
Dec  1 05:36:32 np0005540825 systemd[1]: libpod-cac29cfea584726e50b3c98f10db6de085cfccfa3045874e0303af1a7e7738e3.scope: Deactivated successfully.
Dec  1 05:36:32 np0005540825 podman[296480]: 2025-12-01 10:36:32.751122678 +0000 UTC m=+0.138205867 container died cac29cfea584726e50b3c98f10db6de085cfccfa3045874e0303af1a7e7738e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_mcnulty, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  1 05:36:32 np0005540825 systemd[1]: var-lib-containers-storage-overlay-e9e828488e017551cbfa6b1827accc4e31296b67b607ddd147da74dd747592c7-merged.mount: Deactivated successfully.
Dec  1 05:36:32 np0005540825 podman[296480]: 2025-12-01 10:36:32.791478285 +0000 UTC m=+0.178561504 container remove cac29cfea584726e50b3c98f10db6de085cfccfa3045874e0303af1a7e7738e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:36:32 np0005540825 systemd[1]: libpod-conmon-cac29cfea584726e50b3c98f10db6de085cfccfa3045874e0303af1a7e7738e3.scope: Deactivated successfully.
Dec  1 05:36:32 np0005540825 podman[296522]: 2025-12-01 10:36:32.954071462 +0000 UTC m=+0.041680133 container create ed82d1c713b5019afe06a9ae4e322b89086dfd177bfc9cd98121da1340b68368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  1 05:36:33 np0005540825 systemd[1]: Started libpod-conmon-ed82d1c713b5019afe06a9ae4e322b89086dfd177bfc9cd98121da1340b68368.scope.
Dec  1 05:36:33 np0005540825 podman[296522]: 2025-12-01 10:36:32.935009724 +0000 UTC m=+0.022618415 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:36:33 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:36:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53816f017660eeb4c56ead2a01cde678276207c128afeb637953f11122bade22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53816f017660eeb4c56ead2a01cde678276207c128afeb637953f11122bade22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53816f017660eeb4c56ead2a01cde678276207c128afeb637953f11122bade22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:33 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53816f017660eeb4c56ead2a01cde678276207c128afeb637953f11122bade22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:33 np0005540825 podman[296522]: 2025-12-01 10:36:33.055135618 +0000 UTC m=+0.142744349 container init ed82d1c713b5019afe06a9ae4e322b89086dfd177bfc9cd98121da1340b68368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  1 05:36:33 np0005540825 podman[296522]: 2025-12-01 10:36:33.06907187 +0000 UTC m=+0.156680541 container start ed82d1c713b5019afe06a9ae4e322b89086dfd177bfc9cd98121da1340b68368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:36:33 np0005540825 podman[296522]: 2025-12-01 10:36:33.072614905 +0000 UTC m=+0.160223656 container attach ed82d1c713b5019afe06a9ae4e322b89086dfd177bfc9cd98121da1340b68368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  1 05:36:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:33.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]: {
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:    "1": [
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:        {
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            "devices": [
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "/dev/loop3"
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            ],
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            "lv_name": "ceph_lv0",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            "lv_size": "21470642176",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            "name": "ceph_lv0",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            "tags": {
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.cluster_name": "ceph",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.crush_device_class": "",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.encrypted": "0",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.osd_id": "1",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.type": "block",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.vdo": "0",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:                "ceph.with_tpm": "0"
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            },
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            "type": "block",
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:            "vg_name": "ceph_vg0"
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:        }
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]:    ]
Dec  1 05:36:33 np0005540825 dreamy_saha[296539]: }
Dec  1 05:36:33 np0005540825 systemd[1]: libpod-ed82d1c713b5019afe06a9ae4e322b89086dfd177bfc9cd98121da1340b68368.scope: Deactivated successfully.
Dec  1 05:36:33 np0005540825 podman[296522]: 2025-12-01 10:36:33.374888597 +0000 UTC m=+0.462497338 container died ed82d1c713b5019afe06a9ae4e322b89086dfd177bfc9cd98121da1340b68368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  1 05:36:33 np0005540825 nova_compute[256151]: 2025-12-01 10:36:33.428 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:33.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:33 np0005540825 nova_compute[256151]: 2025-12-01 10:36:33.538 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:33.798Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:33 np0005540825 systemd[1]: var-lib-containers-storage-overlay-53816f017660eeb4c56ead2a01cde678276207c128afeb637953f11122bade22-merged.mount: Deactivated successfully.
Dec  1 05:36:33 np0005540825 podman[296522]: 2025-12-01 10:36:33.99500158 +0000 UTC m=+1.082610291 container remove ed82d1c713b5019afe06a9ae4e322b89086dfd177bfc9cd98121da1340b68368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  1 05:36:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:36:34 np0005540825 systemd[1]: libpod-conmon-ed82d1c713b5019afe06a9ae4e322b89086dfd177bfc9cd98121da1340b68368.scope: Deactivated successfully.
Dec  1 05:36:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 557 B/s rd, 0 op/s
Dec  1 05:36:34 np0005540825 podman[296655]: 2025-12-01 10:36:34.614941338 +0000 UTC m=+0.028244175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:36:34 np0005540825 podman[296655]: 2025-12-01 10:36:34.781197513 +0000 UTC m=+0.194500300 container create 68157b82c454a47469ba8df77db33e072931b60fe2d12838d6ef20b78033e889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_darwin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:36:34 np0005540825 systemd[1]: Started libpod-conmon-68157b82c454a47469ba8df77db33e072931b60fe2d12838d6ef20b78033e889.scope.
Dec  1 05:36:34 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:36:35 np0005540825 podman[296655]: 2025-12-01 10:36:35.048648818 +0000 UTC m=+0.461951625 container init 68157b82c454a47469ba8df77db33e072931b60fe2d12838d6ef20b78033e889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_darwin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 05:36:35 np0005540825 podman[296655]: 2025-12-01 10:36:35.056703732 +0000 UTC m=+0.470006489 container start 68157b82c454a47469ba8df77db33e072931b60fe2d12838d6ef20b78033e889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_darwin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  1 05:36:35 np0005540825 competent_darwin[296682]: 167 167
Dec  1 05:36:35 np0005540825 systemd[1]: libpod-68157b82c454a47469ba8df77db33e072931b60fe2d12838d6ef20b78033e889.scope: Deactivated successfully.
Dec  1 05:36:35 np0005540825 podman[296655]: 2025-12-01 10:36:35.128414365 +0000 UTC m=+0.541717202 container attach 68157b82c454a47469ba8df77db33e072931b60fe2d12838d6ef20b78033e889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_darwin, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:36:35 np0005540825 podman[296655]: 2025-12-01 10:36:35.129055662 +0000 UTC m=+0.542358499 container died 68157b82c454a47469ba8df77db33e072931b60fe2d12838d6ef20b78033e889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_darwin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:36:35 np0005540825 podman[296669]: 2025-12-01 10:36:35.192407473 +0000 UTC m=+0.357385435 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:36:35 np0005540825 systemd[1]: var-lib-containers-storage-overlay-2f88c35ee6837af3177f5f17e147b5256d184e81e14287f9fdec89c64ed0b6e4-merged.mount: Deactivated successfully.
Dec  1 05:36:35 np0005540825 podman[296655]: 2025-12-01 10:36:35.231882066 +0000 UTC m=+0.645184823 container remove 68157b82c454a47469ba8df77db33e072931b60fe2d12838d6ef20b78033e889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_darwin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:36:35 np0005540825 systemd[1]: libpod-conmon-68157b82c454a47469ba8df77db33e072931b60fe2d12838d6ef20b78033e889.scope: Deactivated successfully.
Dec  1 05:36:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:35.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:35 np0005540825 podman[296723]: 2025-12-01 10:36:35.433287408 +0000 UTC m=+0.042775282 container create 632c43e96928808e5ea7ca1ba00b64d5fe1b16735ff04afe9cd46aafec7de8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_vaughan, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:36:35 np0005540825 systemd[1]: Started libpod-conmon-632c43e96928808e5ea7ca1ba00b64d5fe1b16735ff04afe9cd46aafec7de8fa.scope.
Dec  1 05:36:35 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:36:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10aa42ed4cb9ca85ff652b0436f7db2e9f9d5065c6dc91eae075fbfdd532d9da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10aa42ed4cb9ca85ff652b0436f7db2e9f9d5065c6dc91eae075fbfdd532d9da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10aa42ed4cb9ca85ff652b0436f7db2e9f9d5065c6dc91eae075fbfdd532d9da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:35 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10aa42ed4cb9ca85ff652b0436f7db2e9f9d5065c6dc91eae075fbfdd532d9da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:36:35 np0005540825 podman[296723]: 2025-12-01 10:36:35.500330037 +0000 UTC m=+0.109817931 container init 632c43e96928808e5ea7ca1ba00b64d5fe1b16735ff04afe9cd46aafec7de8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 05:36:35 np0005540825 podman[296723]: 2025-12-01 10:36:35.508892815 +0000 UTC m=+0.118380689 container start 632c43e96928808e5ea7ca1ba00b64d5fe1b16735ff04afe9cd46aafec7de8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  1 05:36:35 np0005540825 podman[296723]: 2025-12-01 10:36:35.416745927 +0000 UTC m=+0.026233821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:36:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:35.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:35 np0005540825 podman[296723]: 2025-12-01 10:36:35.512498781 +0000 UTC m=+0.121986685 container attach 632c43e96928808e5ea7ca1ba00b64d5fe1b16735ff04afe9cd46aafec7de8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_vaughan, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:36:36 np0005540825 lvm[296816]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:36:36 np0005540825 lvm[296816]: VG ceph_vg0 finished
Dec  1 05:36:36 np0005540825 suspicious_vaughan[296740]: {}
Dec  1 05:36:36 np0005540825 systemd[1]: libpod-632c43e96928808e5ea7ca1ba00b64d5fe1b16735ff04afe9cd46aafec7de8fa.scope: Deactivated successfully.
Dec  1 05:36:36 np0005540825 podman[296723]: 2025-12-01 10:36:36.2471969 +0000 UTC m=+0.856684794 container died 632c43e96928808e5ea7ca1ba00b64d5fe1b16735ff04afe9cd46aafec7de8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  1 05:36:36 np0005540825 systemd[1]: libpod-632c43e96928808e5ea7ca1ba00b64d5fe1b16735ff04afe9cd46aafec7de8fa.scope: Consumed 1.100s CPU time.
Dec  1 05:36:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 836 B/s rd, 0 op/s
Dec  1 05:36:37 np0005540825 systemd[1]: var-lib-containers-storage-overlay-10aa42ed4cb9ca85ff652b0436f7db2e9f9d5065c6dc91eae075fbfdd532d9da-merged.mount: Deactivated successfully.
Dec  1 05:36:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:37.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:37 np0005540825 podman[296723]: 2025-12-01 10:36:37.30055877 +0000 UTC m=+1.910046644 container remove 632c43e96928808e5ea7ca1ba00b64d5fe1b16735ff04afe9cd46aafec7de8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  1 05:36:37 np0005540825 systemd[1]: libpod-conmon-632c43e96928808e5ea7ca1ba00b64d5fe1b16735ff04afe9cd46aafec7de8fa.scope: Deactivated successfully.
Dec  1 05:36:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:36:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:37.420Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:36:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:37.422Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:36:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:37 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:36:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:37.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:37 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:38 np0005540825 nova_compute[256151]: 2025-12-01 10:36:38.429 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:36:38 np0005540825 nova_compute[256151]: 2025-12-01 10:36:38.540 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 557 B/s rd, 0 op/s
Dec  1 05:36:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:38.929Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:36:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:36:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:39.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:36:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:36:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:39.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:36:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:36:39
Dec  1 05:36:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:36:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:36:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.meta', 'images', '.rgw.root', 'vms', 'default.rgw.log', '.nfs']
Dec  1 05:36:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:36:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:36:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:36:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:36:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:36:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:36:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:36:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:36:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:36:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 557 B/s rd, 0 op/s
Dec  1 05:36:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:41.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:41] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:36:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:41] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:36:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:41.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:36:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:43.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:43 np0005540825 nova_compute[256151]: 2025-12-01 10:36:43.431 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:43.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:43 np0005540825 nova_compute[256151]: 2025-12-01 10:36:43.541 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:43.799Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:36:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:45.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:45.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:36:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:47.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:47.424Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:36:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:47.424Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:47.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:48 np0005540825 nova_compute[256151]: 2025-12-01 10:36:48.432 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:48 np0005540825 nova_compute[256151]: 2025-12-01 10:36:48.543 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:48.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:36:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:49.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:49.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:36:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:51.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:36:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:51] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:36:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:36:51] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:36:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:51.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:52 np0005540825 podman[296899]: 2025-12-01 10:36:52.216545843 +0000 UTC m=+0.084088504 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 05:36:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:36:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:36:52 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.0 total, 600.0 interval
Cumulative writes: 8713 writes, 38K keys, 8712 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
Cumulative WAL: 8713 writes, 8712 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1545 writes, 7174 keys, 1544 commit groups, 1.0 writes per commit group, ingest: 11.85 MB, 0.02 MB/s
Interval WAL: 1545 writes, 1544 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     53.7      1.12              0.21        23    0.049       0      0       0.0       0.0
  L6      1/0   14.21 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.8     99.3     85.8      3.39              0.90        22    0.154    130K    12K       0.0       0.0
 Sum      1/0   14.21 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.8     74.7     77.8      4.50              1.11        45    0.100    130K    12K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9    125.8    126.3      0.61              0.21        10    0.061     36K   3096       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     99.3     85.8      3.39              0.90        22    0.154    130K    12K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     53.9      1.11              0.21        22    0.051       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3000.0 total, 600.0 interval
Flush(GB): cumulative 0.059, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.34 GB write, 0.12 MB/s write, 0.33 GB read, 0.11 MB/s read, 4.5 seconds
Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.6 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563970129350#2 capacity: 304.00 MB usage: 28.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000377 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1735,27.71 MB,9.11452%) FilterBlock(46,381.36 KB,0.122507%) IndexBlock(46,632.14 KB,0.203067%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
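RocksDB emits that stats dump as a single record; rsyslog stores each embedded newline as the octal escape #012 (and the ESC byte of ANSI color codes as #033), so in raw captures the block arrives flattened onto one line. A small helper for undoing the escaping when post-processing such logs:

    import re

    def unescape_syslog(line: str) -> str:
        """Expand syslog octal escapes such as #012 (newline) and #033 (ESC)."""
        return re.sub(r'#(\d{3})', lambda m: chr(int(m.group(1), 8)), line)

    print(unescape_syslog('** DB Stats **#012Uptime(secs): 3000.0 total'))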
Dec  1 05:36:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:53.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:53 np0005540825 nova_compute[256151]: 2025-12-01 10:36:53.434 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:53.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:53 np0005540825 nova_compute[256151]: 2025-12-01 10:36:53.544 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:53.800Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:36:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:53.800Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:36:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
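ganesha's rados_cluster_grace_enforcing reports a negative errno-style return; what -45 means is platform-dependent, but Python's errno table gives a quick read (on Linux, 45 is EL2NSYNC):

    import errno
    import os

    ret = -45  # from the rados_cluster_grace_enforcing line above
    code = -ret
    # errno numbering varies by platform; on Linux 45 maps to EL2NSYNC.
    print(errno.errorcode.get(code, "unknown"), "-", os.strerror(code))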
Dec  1 05:36:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:36:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
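The audit trail shows the mgr polling "osd blocklist ls" as a structured mon command. The same call can be issued from Python through the librados binding; a sketch assuming the client.openstack credentials seen elsewhere in this log are allowed to run it:

    import json
    import rados  # librados Python binding

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, json.loads(outbuf or b"[]"))  # blocklist entries land in outbuf
    cluster.shutdown()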
Dec  1 05:36:55 np0005540825 podman[296920]: 2025-12-01 10:36:55.206695438 +0000 UTC m=+0.064818360 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 05:36:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:55.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:55.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:36:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:36:57 np0005540825 nova_compute[256151]: 2025-12-01 10:36:57.029 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:36:57 np0005540825 nova_compute[256151]: 2025-12-01 10:36:57.029 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:36:57 np0005540825 nova_compute[256151]: 2025-12-01 10:36:57.029 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:36:57 np0005540825 nova_compute[256151]: 2025-12-01 10:36:57.049 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
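All of the req-52f4bf38... lines in this capture come from one long-lived context that oslo.service reuses while cycling through ComputeManager periodic tasks. The scheduling pattern itself is simple; a stripped-down sketch in plain threading (the real implementation lives in oslo_service.periodic_task):

    import threading
    import time

    def heal_instance_info_cache():
        # Stand-in body: nova rebuilds its list of instances and refreshes
        # one instance's network info cache per run; here there are none.
        print("Didn't find any instances for network info cache update.")

    def run_periodically(task, interval):
        def loop():
            while True:
                task()
                time.sleep(interval)
        threading.Thread(target=loop, daemon=True).start()

    run_periodically(heal_instance_info_cache, interval=60)
    time.sleep(1)  # keep the demo process alive for the first run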
Dec  1 05:36:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:57.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:36:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:57.425Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:36:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:57.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:58 np0005540825 nova_compute[256151]: 2025-12-01 10:36:58.436 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:58 np0005540825 nova_compute[256151]: 2025-12-01 10:36:58.547 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:36:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:36:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:58.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:36:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:58.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec  1 05:36:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:36:58.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
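Both ceph-dashboard webhook receivers have been failing for the whole capture, with dial timeouts and context deadlines rather than HTTP errors, which points at connectivity to port 8443 rather than at the receivers themselves. A quick reachability probe of both endpoints:

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=3):
                print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)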
Dec  1 05:36:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:36:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:36:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:36:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:36:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:36:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:36:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:36:59.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:36:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:36:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:36:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:36:59.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:01.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:01] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:37:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:01] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:37:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:01.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:37:03 np0005540825 nova_compute[256151]: 2025-12-01 10:37:03.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:37:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:03.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:03 np0005540825 nova_compute[256151]: 2025-12-01 10:37:03.438 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:03 np0005540825 nova_compute[256151]: 2025-12-01 10:37:03.550 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:03.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:03.801Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1400: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:37:04.596 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:37:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:37:04.597 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:37:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:37:04.597 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
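The three lines above are oslo.concurrency's standard lock instrumentation: acquire, time waited, time held. The underlying pattern is a named mutex around the monitor's child-process check; a minimal equivalent in plain threading (not the oslo API):

    import threading
    import time

    _locks = {}

    def with_named_lock(name, fn):
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        with lock:
            waited = time.monotonic() - t0
            t1 = time.monotonic()
            result = fn()
            held = time.monotonic() - t1
        print(f'Lock "{name}" waited {waited:.3f}s, held {held:.3f}s')
        return result

    with_named_lock("_check_child_processes", lambda: None)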
Dec  1 05:37:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:05.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:37:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:05.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:37:06 np0005540825 podman[296952]: 2025-12-01 10:37:06.217984644 +0000 UTC m=+0.085554573 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1401: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:06.774610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585426774659, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1367, "num_deletes": 251, "total_data_size": 2623301, "memory_usage": 2651296, "flush_reason": "Manual Compaction"}
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585426824492, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 2533730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38099, "largest_seqno": 39465, "table_properties": {"data_size": 2527310, "index_size": 3619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13565, "raw_average_key_size": 20, "raw_value_size": 2514458, "raw_average_value_size": 3714, "num_data_blocks": 158, "num_entries": 677, "num_filter_entries": 677, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764585297, "oldest_key_time": 1764585297, "file_creation_time": 1764585426, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
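RocksDB's EVENT_LOG_v1 records are a JSON document after a fixed marker, so flush and compaction events like the ones above can be extracted mechanically from a capture:

    import json

    MARKER = "EVENT_LOG_v1 "

    def parse_event(line):
        idx = line.find(MARKER)
        return json.loads(line[idx + len(MARKER):]) if idx != -1 else None

    event = parse_event(
        'rocksdb: EVENT_LOG_v1 {"time_micros": 1764585426774659, '
        '"job": 47, "event": "flush_started", "num_entries": 1367}')
    print(event["event"], event["job"])  # flush_started 47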
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 49927 microseconds, and 11087 cpu microseconds.
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:06.824538) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 2533730 bytes OK
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:06.824558) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:06.827332) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:06.827348) EVENT_LOG_v1 {"time_micros": 1764585426827342, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:06.827366) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2617393, prev total WAL file size 2618070, number of live WAL files 2.
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:06.828557) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(2474KB)], [83(14MB)]
Dec  1 05:37:06 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585426828624, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 17433846, "oldest_snapshot_seqno": -1}
Dec  1 05:37:07 np0005540825 nova_compute[256151]: 2025-12-01 10:37:07.023 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6937 keys, 15160939 bytes, temperature: kUnknown
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585427099268, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 15160939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15115327, "index_size": 27196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 182775, "raw_average_key_size": 26, "raw_value_size": 14991086, "raw_average_value_size": 2161, "num_data_blocks": 1068, "num_entries": 6937, "num_filter_entries": 6937, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764582410, "oldest_key_time": 0, "file_creation_time": 1764585426, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23cec031-3abb-406f-b210-f97462e45ae8", "db_session_id": "WQRU59OV9V8EC0IMYNIX", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:07.099642) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 15160939 bytes
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:07.102638) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 64.4 rd, 56.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 14.2 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(12.9) write-amplify(6.0) OK, records in: 7453, records dropped: 516 output_compression: NoCompression
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:07.102693) EVENT_LOG_v1 {"time_micros": 1764585427102673, "job": 48, "event": "compaction_finished", "compaction_time_micros": 270822, "compaction_time_cpu_micros": 35471, "output_level": 6, "num_output_files": 1, "total_output_size": 15160939, "num_input_records": 7453, "num_output_records": 6937, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
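The amplification figures in the summary above can be reproduced from JOB 48's own byte counts: 2533730 bytes of L0 input (table #85), 17433846 bytes of total input, and 15160939 bytes written out (table #86). Write-amplify is output over L0 input; read-write-amplify adds the bytes read:

    l0_input = 2_533_730      # table #85, the L0 flush output
    total_input = 17_433_846  # input_data_size from compaction_started
    output = 15_160_939       # table #86, the compacted L6 file

    print(f"write-amplify:      {output / l0_input:.1f}")                 # 6.0
    print(f"read-write-amplify: {(total_input + output) / l0_input:.1f}") # 12.9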
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585427104081, "job": 48, "event": "table_file_deletion", "file_number": 85}
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764585427107573, "job": 48, "event": "table_file_deletion", "file_number": 83}
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:06.828384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:07.107643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:07.107649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:07.107651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:07.107653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:37:07 np0005540825 ceph-mon[74416]: rocksdb: (Original Log Time 2025/12/01-10:37:07.107655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  1 05:37:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:37:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:07.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:37:07 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:07.426Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:07 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:07 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:07 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:07.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:08 np0005540825 nova_compute[256151]: 2025-12-01 10:37:08.442 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:08 np0005540825 nova_compute[256151]: 2025-12-01 10:37:08.552 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:08 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1402: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:08 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:08.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:08 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:09 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:09 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:09 np0005540825 nova_compute[256151]: 2025-12-01 10:37:09.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:37:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:37:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:09.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:37:09 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:09 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:09 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:09.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:09 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:37:09 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:37:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:37:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:37:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:37:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:37:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:37:09 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:37:10 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1403: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:11.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:11 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:11] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:37:11 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:11] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:37:11 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:11 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:11 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:37:11 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:11.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:37:12 np0005540825 nova_compute[256151]: 2025-12-01 10:37:12.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:37:12 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1404: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.048 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.048 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.049 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.049 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.049 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:37:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:13.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.444 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:13 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:37:13 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806659210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.553 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:13 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:13 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:13 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:13.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.570 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.744 256155 WARNING nova.virt.libvirt.driver [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.745 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4509MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.745 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.746 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
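The 0.521 s "ceph df" call above is how nova's resource tracker sizes the RBD-backed disk pool on each audit pass (it is also what triggers the client.openstack "df" audit entries on the mon). The equivalent call and the fields the driver cares about:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    # In the 'ceph df' JSON schema, total_avail_bytes is what ends up
    # reported as free disk.
    print(stats["total_bytes"], stats["total_avail_bytes"])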
Dec  1 05:37:13 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:13.802Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.821 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.822 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 05:37:13 np0005540825 nova_compute[256151]: 2025-12-01 10:37:13.848 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 05:37:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:13 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:14 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:14 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:14 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  1 05:37:14 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1073295087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  1 05:37:14 np0005540825 nova_compute[256151]: 2025-12-01 10:37:14.348 256155 DEBUG oslo_concurrency.processutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 05:37:14 np0005540825 nova_compute[256151]: 2025-12-01 10:37:14.355 256155 DEBUG nova.compute.provider_tree [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed in ProviderTree for provider: 5efe20fe-1981-4bd9-8786-d9fddc89a5ae update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 05:37:14 np0005540825 nova_compute[256151]: 2025-12-01 10:37:14.374 256155 DEBUG nova.scheduler.client.report [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Inventory has not changed for provider 5efe20fe-1981-4bd9-8786-d9fddc89a5ae based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 05:37:14 np0005540825 nova_compute[256151]: 2025-12-01 10:37:14.376 256155 DEBUG nova.compute.resource_tracker [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 05:37:14 np0005540825 nova_compute[256151]: 2025-12-01 10:37:14.377 256155 DEBUG oslo_concurrency.lockutils [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
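The inventory payload reported at 10:37:14.374 is what placement schedules against; the usable amount of each resource class follows placement's capacity rule, (total - reserved) * allocation_ratio. Plugging in the logged values as a check:

    # Schedulable capacity per resource class: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2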
Dec  1 05:37:14 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1405: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:15.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
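The anonymous HEAD / requests from 192.168.122.100 and 192.168.122.102 repeat on a two-second cadence for the rest of this window; they look like load-balancer liveness probes rather than user traffic. The probe itself is trivial to reproduce; the local port below is a guess, since the beast access lines do not record which port radosgw listens on:

    # Send the same HEAD / probe the balancer does; port 8080 is an assumption.
    import http.client

    conn = http.client.HTTPConnection("127.0.0.1", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # the access log above shows 200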
Dec  1 05:37:15 np0005540825 nova_compute[256151]: 2025-12-01 10:37:15.378 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:37:15 np0005540825 nova_compute[256151]: 2025-12-01 10:37:15.379 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 05:37:15 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:15 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:15 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:15.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:16 np0005540825 nova_compute[256151]: 2025-12-01 10:37:16.028 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:37:16 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:16 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1406: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:37:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:17.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:17 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:17.428Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:17 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:17 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:17 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:17.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:18 np0005540825 nova_compute[256151]: 2025-12-01 10:37:18.447 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:18 np0005540825 nova_compute[256151]: 2025-12-01 10:37:18.554 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:18 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1407: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:18 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:18.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:18 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:19 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:19 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:19.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:19 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:19 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:19 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:19.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:20 np0005540825 nova_compute[256151]: 2025-12-01 10:37:20.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:37:20 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1408: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.003000080s ======
Dec  1 05:37:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:21.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec  1 05:37:21 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:21] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:37:21 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:21] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:37:21 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:21 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:21 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:21 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:21.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:22 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1409: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:37:23 np0005540825 podman[297067]: 2025-12-01 10:37:23.196591739 +0000 UTC m=+0.056448007 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
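The podman health_status events for ovn_metadata_agent (and for multipathd and ovn_controller further down) come from the healthcheck declared in config_data ('test': '/openstack/healthcheck'). The same state those events carry can be read back on demand; a sketch using the container name from the log:

    # Read a container's current health state, matching the periodic event fields.
    import json, subprocess

    out = subprocess.check_output(["podman", "inspect", "ovn_metadata_agent"])
    health = json.loads(out)[0]["State"]["Health"]
    print(health["Status"], "failing streak:", health["FailingStreak"])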
Dec  1 05:37:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:23.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:23 np0005540825 nova_compute[256151]: 2025-12-01 10:37:23.451 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:23 np0005540825 nova_compute[256151]: 2025-12-01 10:37:23.556 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:23 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:23 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:23 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:23.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:23.802Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:37:23 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:23.803Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:37:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:23 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:24 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:24 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:24 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1410: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:24 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:37:24 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:37:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:25.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:25 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:25 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:25 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:25.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:26 np0005540825 podman[297092]: 2025-12-01 10:37:26.207156259 +0000 UTC m=+0.075409002 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:37:26 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:26 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1411: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:37:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:27.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:27 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:27.429Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:27 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:27 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:27 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:27.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:28 np0005540825 nova_compute[256151]: 2025-12-01 10:37:28.452 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:28 np0005540825 nova_compute[256151]: 2025-12-01 10:37:28.558 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:28 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1412: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:28 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:28.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:28 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:29 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:29 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:37:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:29.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:37:29 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:29 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:29 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:29.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:30 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1413: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:31.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:31 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:31] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:37:31 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:31] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Dec  1 05:37:31 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:31 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:31 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:37:31 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:31.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:37:32 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1414: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:37:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:37:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:33.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:37:33 np0005540825 nova_compute[256151]: 2025-12-01 10:37:33.454 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:33 np0005540825 nova_compute[256151]: 2025-12-01 10:37:33.560 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:33 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:33 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:33 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:33.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:33 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:33.804Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:33 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:34 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:34 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1415: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:34 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-crash-compute-0[79836]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
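Unlike the surrounding debug noise, this ceph-crash line is a real fault: the crash agent gets EACCES scraping /var/lib/ceph/crash, so crash reports from this host will not be collected until the permissions are fixed. A quick look at the directory's ownership and mode; in cephadm deployments the crash directory is normally owned by the container ceph user (commonly uid/gid 167, which is an assumption here, though "167 167" also appears as helper-container output below):

    # Inspect ownership/permissions on the path from the error message.
    import os, stat

    st = os.stat("/var/lib/ceph/crash")
    print("uid:", st.st_uid, "gid:", st.st_gid, "mode:", stat.filemode(st.st_mode))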
Dec  1 05:37:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:35.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:35 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:35 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:35 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:35.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:36 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:36 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1416: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:37:37 np0005540825 podman[297147]: 2025-12-01 10:37:37.210161638 +0000 UTC m=+0.078669680 container health_status 976a071544b02342852eb75fb5853d5620793eb24a8dda1e507b8c95aa68ddbf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:37:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:37.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:37 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:37.430Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:37 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:37 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:37 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:37.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:38 np0005540825 nova_compute[256151]: 2025-12-01 10:37:38.458 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:38 np0005540825 nova_compute[256151]: 2025-12-01 10:37:38.562 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1417: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:37:38 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1418: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:37:38 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
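This burst of mon_command audit entries from mgr.compute-0.fospow (config generate-minimal-conf, auth get, config-key set, osd tree) is cephadm's periodic refresh rather than operator activity; the config-key set entries appear with their payloads omitted, and the trailing unprefixed from='mgr...' lines repeat the same dispatches through a second log channel. The same commands can be run interactively, for example:

    # Print the minimal ceph.conf that cephadm distributes to hosts.
    import subprocess

    print(subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"]).decode())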
Dec  1 05:37:38 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:38.938Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:38 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:39 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:39 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:39 np0005540825 podman[297349]: 2025-12-01 10:37:39.196541477 +0000 UTC m=+0.052186833 container create 056ddee50800e135bf5dfd49af2fa71bf510c4cc0926cda536e77db6268b16ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  1 05:37:39 np0005540825 systemd[1]: Started libpod-conmon-056ddee50800e135bf5dfd49af2fa71bf510c4cc0926cda536e77db6268b16ae.scope.
Dec  1 05:37:39 np0005540825 podman[297349]: 2025-12-01 10:37:39.175654079 +0000 UTC m=+0.031299465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:37:39 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:37:39 np0005540825 podman[297349]: 2025-12-01 10:37:39.295148397 +0000 UTC m=+0.150793843 container init 056ddee50800e135bf5dfd49af2fa71bf510c4cc0926cda536e77db6268b16ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:37:39 np0005540825 podman[297349]: 2025-12-01 10:37:39.3057466 +0000 UTC m=+0.161391966 container start 056ddee50800e135bf5dfd49af2fa71bf510c4cc0926cda536e77db6268b16ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:37:39 np0005540825 podman[297349]: 2025-12-01 10:37:39.309849189 +0000 UTC m=+0.165494565 container attach 056ddee50800e135bf5dfd49af2fa71bf510c4cc0926cda536e77db6268b16ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:37:39 np0005540825 systemd[1]: libpod-056ddee50800e135bf5dfd49af2fa71bf510c4cc0926cda536e77db6268b16ae.scope: Deactivated successfully.
Dec  1 05:37:39 np0005540825 laughing_vaughan[297366]: 167 167
Dec  1 05:37:39 np0005540825 conmon[297366]: conmon 056ddee50800e135bf5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-056ddee50800e135bf5dfd49af2fa71bf510c4cc0926cda536e77db6268b16ae.scope/container/memory.events
Dec  1 05:37:39 np0005540825 podman[297349]: 2025-12-01 10:37:39.317466202 +0000 UTC m=+0.173111568 container died 056ddee50800e135bf5dfd49af2fa71bf510c4cc0926cda536e77db6268b16ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  1 05:37:39 np0005540825 systemd[1]: var-lib-containers-storage-overlay-ccf7893c8832a49c17129c477a6ead486cffdb7b84aff3f85989c92d1756bcc2-merged.mount: Deactivated successfully.
Dec  1 05:37:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:39.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:39 np0005540825 podman[297349]: 2025-12-01 10:37:39.365038292 +0000 UTC m=+0.220683678 container remove 056ddee50800e135bf5dfd49af2fa71bf510c4cc0926cda536e77db6268b16ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  1 05:37:39 np0005540825 systemd[1]: libpod-conmon-056ddee50800e135bf5dfd49af2fa71bf510c4cc0926cda536e77db6268b16ae.scope: Deactivated successfully.
Dec  1 05:37:39 np0005540825 podman[297389]: 2025-12-01 10:37:39.560989139 +0000 UTC m=+0.044869528 container create 2d4b7deabd3aa771d2e0c73f61b17b908a1443330171f37743a120021c1e40e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_beaver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  1 05:37:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Optimize plan auto_2025-12-01_10:37:39
Dec  1 05:37:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  1 05:37:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] do_upmap
Dec  1 05:37:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'images', 'volumes', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.control', '.nfs']
Dec  1 05:37:39 np0005540825 ceph-mgr[74709]: [balancer INFO root] prepared 0/10 upmap changes
Dec  1 05:37:39 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:37:39 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:37:39 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:39 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:39 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:39.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:39 np0005540825 systemd[1]: Started libpod-conmon-2d4b7deabd3aa771d2e0c73f61b17b908a1443330171f37743a120021c1e40e1.scope.
Dec  1 05:37:39 np0005540825 podman[297389]: 2025-12-01 10:37:39.540856712 +0000 UTC m=+0.024737141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:37:39 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:37:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a230d1129e1da7d421bb882c01e8c248545467178673364b780e02e79a9f3b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a230d1129e1da7d421bb882c01e8c248545467178673364b780e02e79a9f3b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a230d1129e1da7d421bb882c01e8c248545467178673364b780e02e79a9f3b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a230d1129e1da7d421bb882c01e8c248545467178673364b780e02e79a9f3b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:39 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a230d1129e1da7d421bb882c01e8c248545467178673364b780e02e79a9f3b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:37:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:37:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:37:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:37:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] scanning for idle connections..
Dec  1 05:37:39 np0005540825 ceph-mgr[74709]: [volumes INFO mgr_util] cleaning up connections: []
Dec  1 05:37:39 np0005540825 podman[297389]: 2025-12-01 10:37:39.661064118 +0000 UTC m=+0.144944597 container init 2d4b7deabd3aa771d2e0c73f61b17b908a1443330171f37743a120021c1e40e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_beaver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  1 05:37:39 np0005540825 podman[297389]: 2025-12-01 10:37:39.672838233 +0000 UTC m=+0.156718622 container start 2d4b7deabd3aa771d2e0c73f61b17b908a1443330171f37743a120021c1e40e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_beaver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  1 05:37:39 np0005540825 podman[297389]: 2025-12-01 10:37:39.67650341 +0000 UTC m=+0.160383799 container attach 2d4b7deabd3aa771d2e0c73f61b17b908a1443330171f37743a120021c1e40e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 05:37:40 np0005540825 naughty_beaver[297405]: --> passed data devices: 0 physical, 1 LVM
Dec  1 05:37:40 np0005540825 naughty_beaver[297405]: --> All data devices are unavailable
Dec  1 05:37:40 np0005540825 systemd[1]: libpod-2d4b7deabd3aa771d2e0c73f61b17b908a1443330171f37743a120021c1e40e1.scope: Deactivated successfully.
Dec  1 05:37:40 np0005540825 podman[297389]: 2025-12-01 10:37:40.052586073 +0000 UTC m=+0.536466462 container died 2d4b7deabd3aa771d2e0c73f61b17b908a1443330171f37743a120021c1e40e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_beaver, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  1 05:37:40 np0005540825 systemd[1]: var-lib-containers-storage-overlay-7a230d1129e1da7d421bb882c01e8c248545467178673364b780e02e79a9f3b4-merged.mount: Deactivated successfully.
Dec  1 05:37:40 np0005540825 systemd-logind[789]: New session 59 of user zuul.
Dec  1 05:37:40 np0005540825 systemd[1]: Started Session 59 of User zuul.
Dec  1 05:37:40 np0005540825 podman[297389]: 2025-12-01 10:37:40.288990629 +0000 UTC m=+0.772871028 container remove 2d4b7deabd3aa771d2e0c73f61b17b908a1443330171f37743a120021c1e40e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:37:40 np0005540825 systemd[1]: libpod-conmon-2d4b7deabd3aa771d2e0c73f61b17b908a1443330171f37743a120021c1e40e1.scope: Deactivated successfully.
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] _maybe_adjust
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  1 05:37:40 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1419: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 915 B/s rd, 0 op/s
Dec  1 05:37:41 np0005540825 podman[297562]: 2025-12-01 10:37:41.035399071 +0000 UTC m=+0.093953167 container create e247dc3a05e3980c2e11c4f9c405b693a6a4ff74c8e4af44a051c46cc9d0d76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  1 05:37:41 np0005540825 podman[297562]: 2025-12-01 10:37:40.962827085 +0000 UTC m=+0.021381181 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:37:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:41.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:41 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:41] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:37:41 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:41] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:37:41 np0005540825 systemd[1]: Started libpod-conmon-e247dc3a05e3980c2e11c4f9c405b693a6a4ff74c8e4af44a051c46cc9d0d76d.scope.
Dec  1 05:37:41 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:41 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:37:41 np0005540825 podman[297562]: 2025-12-01 10:37:41.433212912 +0000 UTC m=+0.491766998 container init e247dc3a05e3980c2e11c4f9c405b693a6a4ff74c8e4af44a051c46cc9d0d76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_napier, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  1 05:37:41 np0005540825 podman[297562]: 2025-12-01 10:37:41.441281997 +0000 UTC m=+0.499836083 container start e247dc3a05e3980c2e11c4f9c405b693a6a4ff74c8e4af44a051c46cc9d0d76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:37:41 np0005540825 podman[297562]: 2025-12-01 10:37:41.444878723 +0000 UTC m=+0.503432809 container attach e247dc3a05e3980c2e11c4f9c405b693a6a4ff74c8e4af44a051c46cc9d0d76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_napier, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:37:41 np0005540825 quizzical_napier[297578]: 167 167
Dec  1 05:37:41 np0005540825 systemd[1]: libpod-e247dc3a05e3980c2e11c4f9c405b693a6a4ff74c8e4af44a051c46cc9d0d76d.scope: Deactivated successfully.
Dec  1 05:37:41 np0005540825 podman[297562]: 2025-12-01 10:37:41.450684878 +0000 UTC m=+0.509238964 container died e247dc3a05e3980c2e11c4f9c405b693a6a4ff74c8e4af44a051c46cc9d0d76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:37:41 np0005540825 systemd[1]: var-lib-containers-storage-overlay-b0458ea8c102e9dc6f2fd61758175bac0c640a7e5106b0ffe1a7a418bcf09903-merged.mount: Deactivated successfully.
Dec  1 05:37:41 np0005540825 podman[297562]: 2025-12-01 10:37:41.498216426 +0000 UTC m=+0.556770512 container remove e247dc3a05e3980c2e11c4f9c405b693a6a4ff74c8e4af44a051c46cc9d0d76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:37:41 np0005540825 systemd[1]: libpod-conmon-e247dc3a05e3980c2e11c4f9c405b693a6a4ff74c8e4af44a051c46cc9d0d76d.scope: Deactivated successfully.
Dec  1 05:37:41 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:41 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:41 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:41.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:41 np0005540825 podman[297645]: 2025-12-01 10:37:41.666220408 +0000 UTC m=+0.041142409 container create 8959ad58a11d7ef8fb5987791f68bbe4d428c2a5ae888489d51a4e0a7b3d2e90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  1 05:37:41 np0005540825 systemd[1]: Started libpod-conmon-8959ad58a11d7ef8fb5987791f68bbe4d428c2a5ae888489d51a4e0a7b3d2e90.scope.
Dec  1 05:37:41 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:37:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b478a55c5c90c57e4def14cc34862f1762c5bc285ed676b9af1e60a5abb8a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b478a55c5c90c57e4def14cc34862f1762c5bc285ed676b9af1e60a5abb8a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b478a55c5c90c57e4def14cc34862f1762c5bc285ed676b9af1e60a5abb8a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:41 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b478a55c5c90c57e4def14cc34862f1762c5bc285ed676b9af1e60a5abb8a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:41 np0005540825 podman[297645]: 2025-12-01 10:37:41.645442544 +0000 UTC m=+0.020364535 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:37:41 np0005540825 podman[297645]: 2025-12-01 10:37:41.752779717 +0000 UTC m=+0.127701738 container init 8959ad58a11d7ef8fb5987791f68bbe4d428c2a5ae888489d51a4e0a7b3d2e90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  1 05:37:41 np0005540825 podman[297645]: 2025-12-01 10:37:41.760075972 +0000 UTC m=+0.134997943 container start 8959ad58a11d7ef8fb5987791f68bbe4d428c2a5ae888489d51a4e0a7b3d2e90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bassi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  1 05:37:41 np0005540825 podman[297645]: 2025-12-01 10:37:41.763328638 +0000 UTC m=+0.138250719 container attach 8959ad58a11d7ef8fb5987791f68bbe4d428c2a5ae888489d51a4e0a7b3d2e90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  1 05:37:42 np0005540825 practical_bassi[297667]: {
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:    "1": [
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:        {
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            "devices": [
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "/dev/loop3"
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            ],
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            "lv_name": "ceph_lv0",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            "lv_size": "21470642176",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=365f19c2-81e5-5edd-b6b4-280555214d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0faa9895-0b70-4c34-8548-ef8fc62fc047,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            "lv_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            "name": "ceph_lv0",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            "tags": {
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.block_uuid": "D610Ma-eufA-1RlA-VTcZ-ft4l-xe3K-80ghhX",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.cephx_lockbox_secret": "",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.cluster_fsid": "365f19c2-81e5-5edd-b6b4-280555214d3a",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.cluster_name": "ceph",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.crush_device_class": "",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.encrypted": "0",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.osd_fsid": "0faa9895-0b70-4c34-8548-ef8fc62fc047",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.osd_id": "1",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.type": "block",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.vdo": "0",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:                "ceph.with_tpm": "0"
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            },
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            "type": "block",
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:            "vg_name": "ceph_vg0"
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:        }
Dec  1 05:37:42 np0005540825 practical_bassi[297667]:    ]
Dec  1 05:37:42 np0005540825 practical_bassi[297667]: }
Dec  1 05:37:42 np0005540825 systemd[1]: libpod-8959ad58a11d7ef8fb5987791f68bbe4d428c2a5ae888489d51a4e0a7b3d2e90.scope: Deactivated successfully.
Dec  1 05:37:42 np0005540825 podman[297645]: 2025-12-01 10:37:42.084865626 +0000 UTC m=+0.459787637 container died 8959ad58a11d7ef8fb5987791f68bbe4d428c2a5ae888489d51a4e0a7b3d2e90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  1 05:37:42 np0005540825 systemd[1]: var-lib-containers-storage-overlay-60b478a55c5c90c57e4def14cc34862f1762c5bc285ed676b9af1e60a5abb8a2-merged.mount: Deactivated successfully.
Dec  1 05:37:42 np0005540825 podman[297645]: 2025-12-01 10:37:42.132472416 +0000 UTC m=+0.507394387 container remove 8959ad58a11d7ef8fb5987791f68bbe4d428c2a5ae888489d51a4e0a7b3d2e90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bassi, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  1 05:37:42 np0005540825 systemd[1]: libpod-conmon-8959ad58a11d7ef8fb5987791f68bbe4d428c2a5ae888489d51a4e0a7b3d2e90.scope: Deactivated successfully.
Dec  1 05:37:42 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1420: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Dec  1 05:37:42 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26776 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:42 np0005540825 podman[297871]: 2025-12-01 10:37:42.824948209 +0000 UTC m=+0.108186487 container create 7ee611291c0f7fd53917f09d2342d90691eec4f7277dfa6b87257f699d413cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_haibt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 05:37:42 np0005540825 podman[297871]: 2025-12-01 10:37:42.737868586 +0000 UTC m=+0.021106844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:37:42 np0005540825 systemd[1]: Started libpod-conmon-7ee611291c0f7fd53917f09d2342d90691eec4f7277dfa6b87257f699d413cbf.scope.
Dec  1 05:37:42 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:37:42 np0005540825 podman[297871]: 2025-12-01 10:37:42.921499504 +0000 UTC m=+0.204737782 container init 7ee611291c0f7fd53917f09d2342d90691eec4f7277dfa6b87257f699d413cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  1 05:37:42 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17502 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:42 np0005540825 podman[297871]: 2025-12-01 10:37:42.93257884 +0000 UTC m=+0.215817088 container start 7ee611291c0f7fd53917f09d2342d90691eec4f7277dfa6b87257f699d413cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  1 05:37:42 np0005540825 podman[297871]: 2025-12-01 10:37:42.935910089 +0000 UTC m=+0.219148337 container attach 7ee611291c0f7fd53917f09d2342d90691eec4f7277dfa6b87257f699d413cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  1 05:37:42 np0005540825 dreamy_haibt[297887]: 167 167
Dec  1 05:37:42 np0005540825 systemd[1]: libpod-7ee611291c0f7fd53917f09d2342d90691eec4f7277dfa6b87257f699d413cbf.scope: Deactivated successfully.
Dec  1 05:37:42 np0005540825 podman[297871]: 2025-12-01 10:37:42.938167809 +0000 UTC m=+0.221406057 container died 7ee611291c0f7fd53917f09d2342d90691eec4f7277dfa6b87257f699d413cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  1 05:37:42 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27320 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:42 np0005540825 systemd[1]: var-lib-containers-storage-overlay-5b80afed9ee8d5de397e704819902e863c28d73a6882bc161ffab7f53bfe4d03-merged.mount: Deactivated successfully.
Dec  1 05:37:42 np0005540825 podman[297871]: 2025-12-01 10:37:42.984790333 +0000 UTC m=+0.268028581 container remove 7ee611291c0f7fd53917f09d2342d90691eec4f7277dfa6b87257f699d413cbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_haibt, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  1 05:37:42 np0005540825 systemd[1]: libpod-conmon-7ee611291c0f7fd53917f09d2342d90691eec4f7277dfa6b87257f699d413cbf.scope: Deactivated successfully.
Dec  1 05:37:43 np0005540825 podman[297916]: 2025-12-01 10:37:43.148559981 +0000 UTC m=+0.044688563 container create 0ed70b768d20942e1eef84ff4c68e107c4ace58653f1ea3810f8b66ccd41dbfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec  1 05:37:43 np0005540825 systemd[1]: Started libpod-conmon-0ed70b768d20942e1eef84ff4c68e107c4ace58653f1ea3810f8b66ccd41dbfa.scope.
Dec  1 05:37:43 np0005540825 systemd[1]: Started libcrun container.
Dec  1 05:37:43 np0005540825 podman[297916]: 2025-12-01 10:37:43.131083365 +0000 UTC m=+0.027211977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  1 05:37:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60172d6ad4024049dfcb5f2d7c4811ee6e2de3b69325d77482ddcbf57ead510c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60172d6ad4024049dfcb5f2d7c4811ee6e2de3b69325d77482ddcbf57ead510c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60172d6ad4024049dfcb5f2d7c4811ee6e2de3b69325d77482ddcbf57ead510c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:43 np0005540825 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60172d6ad4024049dfcb5f2d7c4811ee6e2de3b69325d77482ddcbf57ead510c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  1 05:37:43 np0005540825 podman[297916]: 2025-12-01 10:37:43.239882398 +0000 UTC m=+0.136011000 container init 0ed70b768d20942e1eef84ff4c68e107c4ace58653f1ea3810f8b66ccd41dbfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_jepsen, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  1 05:37:43 np0005540825 podman[297916]: 2025-12-01 10:37:43.248189409 +0000 UTC m=+0.144317991 container start 0ed70b768d20942e1eef84ff4c68e107c4ace58653f1ea3810f8b66ccd41dbfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:37:43 np0005540825 podman[297916]: 2025-12-01 10:37:43.251811716 +0000 UTC m=+0.147940308 container attach 0ed70b768d20942e1eef84ff4c68e107c4ace58653f1ea3810f8b66ccd41dbfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Dec  1 05:37:43 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26788 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:43.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:43 np0005540825 nova_compute[256151]: 2025-12-01 10:37:43.461 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:43 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17511 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:43 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27326 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:43 np0005540825 nova_compute[256151]: 2025-12-01 10:37:43.563 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:43 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:43 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:37:43 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:43.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:37:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:43.806Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:37:43 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:43.808Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:43 np0005540825 lvm[298056]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:37:43 np0005540825 lvm[298056]: VG ceph_vg0 finished
Dec  1 05:37:43 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec  1 05:37:43 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987919371' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  1 05:37:43 np0005540825 angry_jepsen[297936]: {}
Dec  1 05:37:43 np0005540825 systemd[1]: libpod-0ed70b768d20942e1eef84ff4c68e107c4ace58653f1ea3810f8b66ccd41dbfa.scope: Deactivated successfully.
Dec  1 05:37:43 np0005540825 podman[297916]: 2025-12-01 10:37:43.990583534 +0000 UTC m=+0.886712126 container died 0ed70b768d20942e1eef84ff4c68e107c4ace58653f1ea3810f8b66ccd41dbfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_jepsen, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  1 05:37:43 np0005540825 systemd[1]: libpod-0ed70b768d20942e1eef84ff4c68e107c4ace58653f1ea3810f8b66ccd41dbfa.scope: Consumed 1.071s CPU time.
Dec  1 05:37:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:43 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:44 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:44 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:44 np0005540825 systemd[1]: var-lib-containers-storage-overlay-60172d6ad4024049dfcb5f2d7c4811ee6e2de3b69325d77482ddcbf57ead510c-merged.mount: Deactivated successfully.
Dec  1 05:37:44 np0005540825 podman[297916]: 2025-12-01 10:37:44.040948947 +0000 UTC m=+0.937077539 container remove 0ed70b768d20942e1eef84ff4c68e107c4ace58653f1ea3810f8b66ccd41dbfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_jepsen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  1 05:37:44 np0005540825 systemd[1]: libpod-conmon-0ed70b768d20942e1eef84ff4c68e107c4ace58653f1ea3810f8b66ccd41dbfa.scope: Deactivated successfully.
Dec  1 05:37:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  1 05:37:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:37:44 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  1 05:37:44 np0005540825 ceph-mon[74416]: log_channel(audit) log [INF] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:37:44 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1421: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Dec  1 05:37:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:37:45 np0005540825 ceph-mon[74416]: from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' 
Dec  1 05:37:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:45.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:45 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:45 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:45 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:45.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:46 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:46 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1422: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Dec  1 05:37:47 np0005540825 ovs-vsctl[298191]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  1 05:37:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:47.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:47 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:47.432Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:47 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:47 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:37:47 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:47.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:37:48 np0005540825 virtqemud[255660]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  1 05:37:48 np0005540825 virtqemud[255660]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  1 05:37:48 np0005540825 virtqemud[255660]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  1 05:37:48 np0005540825 nova_compute[256151]: 2025-12-01 10:37:48.463 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:48 np0005540825 nova_compute[256151]: 2025-12-01 10:37:48.565 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:48 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: cache status {prefix=cache status} (starting...)
Dec  1 05:37:48 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1423: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Dec  1 05:37:48 np0005540825 lvm[298516]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  1 05:37:48 np0005540825 lvm[298516]: VG ceph_vg0 finished
Dec  1 05:37:48 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: client ls {prefix=client ls} (starting...)
Dec  1 05:37:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:48.938Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:37:48 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:48.940Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:48 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:49 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:49 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:49 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26800 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:37:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:49.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:37:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  1 05:37:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  1 05:37:49 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17523 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:49 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26815 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:49 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: damage ls {prefix=damage ls} (starting...)
Dec  1 05:37:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  1 05:37:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2575364442' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  1 05:37:49 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27341 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:49 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump loads {prefix=dump loads} (starting...)
Dec  1 05:37:49 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:49 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:49 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:49.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:49 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  1 05:37:49 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26824 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:49 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  1 05:37:49 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17532 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec  1 05:37:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  1 05:37:49 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27356 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:49 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  1 05:37:49 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1431723179' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  1 05:37:49 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  1 05:37:50 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  1 05:37:50 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17544 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec  1 05:37:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256025264' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  1 05:37:50 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26833 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:50 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27371 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:50 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  1 05:37:50 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  1 05:37:50 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1424: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:37:50 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17562 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:50 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27386 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:50 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: ops {prefix=ops} (starting...)
Dec  1 05:37:50 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec  1 05:37:50 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/287918769' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  1 05:37:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec  1 05:37:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564244990' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  1 05:37:51 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26869 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:51 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17583 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:51.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:51 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:51] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Dec  1 05:37:51 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:37:51] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
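The two entries above are the same Prometheus scrape of the mgr's /metrics endpoint, logged once by the container unit and once by ceph-mgr's cherrypy access log. A sketch for reproducing the scrape by hand; the port is an assumption (9283 is the mgr prometheus module's default and is not shown in the log):

```python
import urllib.request

# Host taken from the mgr address seen in this log; port 9283 is assumed
# (default for the mgr prometheus module).
URL = "http://192.168.122.100:9283/metrics"

with urllib.request.urlopen(URL, timeout=5) as resp:
    body = resp.read().decode()

print(resp.status, len(body), "bytes")  # the scrape above returned 200 and ~48 KiB
# Pick one well-known metric out of the exposition:
print([l for l in body.splitlines() if l.startswith("ceph_health_status")])
```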
Dec  1 05:37:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:51 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26881 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:51 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: session ls {prefix=session ls} (starting...)
Dec  1 05:37:51 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27422 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  1 05:37:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608612401' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  1 05:37:51 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:51 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:51 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:51 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26896 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:51 np0005540825 ceph-mds[95644]: mds.cephfs.compute-0.xijran asok_command: status {prefix=status} (starting...)
Dec  1 05:37:51 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27431 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:51 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  1 05:37:51 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025458884' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430623412' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055162567' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  1 05:37:52 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1425: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/493650774' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  1 05:37:52 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26935 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:52 np0005540825 ceph-mgr[74709]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  1 05:37:52 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T10:37:52.809+0000 7f5445f76640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  1 05:37:52 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3066064762' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  1 05:37:53 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17631 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T10:37:53.164+0000 7f5445f76640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  1 05:37:53 np0005540825 ceph-mgr[74709]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  1 05:37:53 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27479 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: 2025-12-01T10:37:53.219+0000 7f5445f76640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  1 05:37:53 np0005540825 ceph-mgr[74709]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
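The EOPNOTSUPP (95) replies above repeat because something keeps issuing `ceph insights` while the insights mgr module is disabled; the mgr's own reply names the fix (`ceph mgr module enable insights`). A small sketch that checks the module list and enables it, assuming the ceph CLI and an admin keyring are available on this host:

```python
import json
import subprocess

# `ceph mgr module ls --format json` reports enabled/disabled mgr modules.
mods = json.loads(subprocess.check_output(
    ["ceph", "mgr", "module", "ls", "--format", "json"]))

enabled = set(mods.get("enabled_modules", []))
if "insights" not in enabled:
    # Apply the remedy the mgr suggested in the log lines above.
    subprocess.check_call(["ceph", "mgr", "module", "enable", "insights"])
```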
Dec  1 05:37:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:53.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  1 05:37:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661865303' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  1 05:37:53 np0005540825 nova_compute[256151]: 2025-12-01 10:37:53.466 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:53 np0005540825 nova_compute[256151]: 2025-12-01 10:37:53.566 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec  1 05:37:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3575315556' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
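The mon_command payload above is exactly what the CLI sends for `ceph log last 10000 debug audit`. The same JSON can be submitted directly through librados; the conf and keyring paths below are assumptions for a standard cephadm host:

```python
import json
import rados  # python3-rados bindings, shipped with ceph-common

# Assumed standard locations for the cluster conf and admin keyring.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                      name="client.admin",
                      conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
cluster.connect()

# Verbatim payload from the handle_command line above.
cmd = json.dumps({"prefix": "log last", "num": 10000,
                  "level": "debug", "channel": "audit"})
ret, out, errs = cluster.mon_command(cmd, b"")
print(ret, out.decode()[:500])
cluster.shutdown()
```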
Dec  1 05:37:53 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:53 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:53 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:53.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:53 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26983 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:53 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:53.809Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
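The alertmanager dispatcher above gave up after two attempts to POST to the dashboard receivers on compute-1 and compute-2 ("context deadline exceeded"). A reachability probe against the same URL (taken verbatim from the log) can separate a down endpoint from a network problem; note this sketch issues a GET rather than alertmanager's POST, so it only tests connectivity:

```python
import urllib.request

# URL copied from the failed notify in the log line above.
url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        print("reachable, status", resp.status)
except OSError as e:  # covers URLError, timeouts, connection refused
    print("unreachable:", e)  # a timeout here reproduces the dispatcher's failure
```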
Dec  1 05:37:53 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  1 05:37:53 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/758342641' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  1 05:37:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:53 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:54 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:54 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec  1 05:37:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1670393597' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  1 05:37:54 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27521 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:54 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.26998 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:54 np0005540825 podman[299273]: 2025-12-01 10:37:54.22656188 +0000 UTC m=+0.089562460 container health_status 77ffcaa6087752bb0d8f447908bba6ba63f9cc447beb74c0b4ac456d77f211ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
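The podman event above is a scheduled health check on ovn_metadata_agent reporting health_status=healthy via its configured test ('/openstack/healthcheck'). The same probe can be triggered on demand; this sketch assumes podman is invoked on the host that owns the container:

```python
import subprocess

# `podman healthcheck run <name>` executes the container's configured
# health test and exits 0 on success.
result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_metadata_agent"],
    capture_output=True, text=True)
print("healthy" if result.returncode == 0
      else f"unhealthy: {result.stdout} {result.stderr}")
```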
Dec  1 05:37:54 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17670 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  1 05:37:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='mgr.14643 192.168.122.100:0/1679537773' entity='mgr.compute-0.fospow' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  1 05:37:54 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  1 05:37:54 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020519107' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  1 05:37:54 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27016 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:54 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1426: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:54 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27533 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:54 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17682 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:55 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  1 05:37:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2595146075' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  1 05:37:55 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27034 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:55 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17697 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966262 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
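The heartbeat line above reports BlueStore space as raw hex byte counts. Decoding them (field order assumed to be available / internally reserved / total, per BlueStore's store_statfs_t) gives roughly 20 GiB for this OSD, consistent with the pgmap's "60 GiB / 60 GiB avail" across the three OSDs:

```python
# Values copied from the osd_stat(store_statfs(...)) heartbeat above;
# the meaning of each field is an assumption noted in the lead-in.
def gib(hexval: str) -> float:
    """Convert a hex byte count to GiB."""
    return int(hexval, 16) / 2**30

for name, val in [("available", "0x4fc5fe000"),
                  ("reserved",  "0x0"),
                  ("total",     "0x4ffc00000")]:
    print(f"{name:9s} {gib(val):6.2f} GiB")
# available  19.94 GiB
# reserved    0.00 GiB
# total      20.00 GiB
```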
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.752355576s of 21.768016815s, submitted: 3
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966526 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 3555328 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968038 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968038 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.076677322s of 12.088689804s, submitted: 3
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 3538944 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 3530752 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea044cc800 session 0x55ea0679c960
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea03efb860
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 3530752 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967183 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967183 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 3506176 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 3489792 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.910959244s of 13.925210953s, submitted: 3
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967315 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968827 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969748 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 3481600 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 3473408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.706723213s of 14.736872673s, submitted: 4
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 3473408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 3473408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 3473408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 3473408 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 3457024 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 3448832 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 3432448 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 3424256 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04a10800 session 0x55ea05bf2d20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04ac8800 session 0x55ea03efad20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969616 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 3407872 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 59.579448700s of 59.583278656s, submitted: 1
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03158c00 session 0x55ea05bf6b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea044cc800 session 0x55ea04d56b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969748 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 3399680 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 3383296 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971260 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 3366912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 3366912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 3366912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 3366912 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: mgrc ms_handle_reset ms_handle_reset con 0x55ea04ca1000
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1444264366
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1444264366,v1:192.168.122.100:6801/1444264366]
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: mgrc handle_mgr_configure stats_period=5
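
These four mgrc lines show the OSD dropping its ceph-mgr session and immediately re-establishing it with the same manager at 192.168.122.100, which then re-sends its configuration: stats_period=5 tells the OSD to report statistics every 5 seconds. A one-off reset like this is harmless; repeated resets would point at a flapping mgr. A small sketch that tallies reset and reconnect events across a log file (the path is a placeholder):

    import re
    from collections import Counter

    events = Counter()
    with open("/var/log/messages", errors="replace") as f:   # placeholder path
        for line in f:
            if "mgrc reconnect Starting new session" in line:
                events["mgr reconnect"] += 1
            elif re.search(r"ms_handle_reset con 0x[0-9a-f]+", line):
                events["connection reset"] += 1

    for event, count in events.most_common():
        print(f"{count:6d}  {event}")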
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 3186688 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04a11400 session 0x55ea0684bc20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.972988129s of 10.982455254s, submitted: 2
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971392 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 3170304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 3170304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 3170304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 3170304 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972181 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975205 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.104986191s of 12.128035545s, submitted: 6
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974482 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 3145728 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974482 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea038c6d20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06c174a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974482 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06506400 session 0x55ea06664780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04ac8800 session 0x55ea06acd0e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974482 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 3129344 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.088794708s of 21.097955704s, submitted: 3
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 3112960 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 3112960 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974614 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 3112960 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 3104768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 3104768 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 3088384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 3088384 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974746 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974155 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 9221 writes, 35K keys, 9221 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 9221 writes, 2074 syncs, 4.45 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 826 writes, 1269 keys, 826 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
Interval WAL: 826 writes, 400 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ea023ad350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
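
RocksDB emits the periodic DUMPING STATS report above as one multi-line message, and syslog flattens it by encoding every newline as #012 (octal for \n), so in the raw file the whole dump arrives as a single escaped line (cut off mid-report by the logger's length limit, which is why the last Stalls(count) line above ends abruptly). When working with the raw file, restoring the layout is a one-line transform:

    # Undo syslog's octal newline escaping (#012 -> "\n").
    def unescape_syslog(msg: str) -> str:
        return msg.replace("#012", "\n")

    raw = "#012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval"
    print(unescape_syslog(raw))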
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.976496696s of 12.989195824s, submitted: 3
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 3072000 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973432 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973300 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 3039232 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973300 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03158400 session 0x55ea05bf63c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea05c62c00 session 0x55ea03efa960
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973300 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread fragmentation_score=0.000030 took=0.000075s
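
BlueStore also samples its allocator's fragmentation score, reported on what appears to be a 0-to-1 scale where 0 means no fragmentation; 0.000030, computed in 75 microseconds, is what you would expect from a nearly empty OSD. A hypothetical watcher for this metric (the 0.8 threshold is arbitrary, not a Ceph default):

    import re

    THRESHOLD = 0.8  # arbitrary alert level, not a Ceph default

    def check(line: str) -> None:
        m = re.search(r"fragmentation_score=([0-9.]+) took=([0-9.]+)s", line)
        if m:
            score, took = float(m.group(1)), float(m.group(2))
            status = "WARN" if score > THRESHOLD else "ok"
            print(f"{status}: fragmentation {score:.6f} ({took * 1e6:.0f} us sample)")

    check("bluestore.MempoolThread fragmentation_score=0.000030 took=0.000075s")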
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973300 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973300 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.629129410s of 29.642702103s, submitted: 3
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973432 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974944 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.602149010s of 13.611645699s, submitted: 2
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3031040 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea05b23000 session 0x55ea05c59e00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06639800 session 0x55ea05c592c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974812 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.389781952s of 40.393127441s, submitted: 1
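Annotation: the _kv_sync_thread utilization samples in this capture (this one plus the later "submitted: 4", "1", "3", "3", "378", and "2" lines) all show the RocksDB sync thread idle for nearly the whole sampling window, i.e. the OSD is taking almost no write traffic. Checking the busy fraction from the figures as logged:

    # (idle_seconds, window_seconds, submitted) copied from the
    # _kv_sync_thread utilization lines in this capture.
    samples = [
        (40.389781952, 40.393127441, 1),
        (13.319105148, 13.360455513, 4),
        (14.696228981, 14.700200081, 1),
        (12.131916046, 12.145732880, 3),
        (25.728481293, 25.739419937, 3),
        (30.944995880, 32.024192810, 378),
        (12.414603233, 12.424942970, 2),
    ]
    for idle, window, submitted in samples:
        busy = 1 - idle / window
        print(f"busy {busy:7.3%} over {window:6.2f}s, {submitted} txns submitted")

Even the busiest sample (378 transactions) works out to about 3.4% busy; every other window is under 0.4%.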
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974944 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3022848 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976456 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977377 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3014656 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.319105148s of 13.360455513s, submitted: 4
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977245 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea069e3e00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea05c58960
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06506400 session 0x55ea066670e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb0000 session 0x55ea06946f00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977245 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977245 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.696228981s of 14.700200081s, submitted: 1
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977509 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977509 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 1941504 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.131916046s of 12.145732880s, submitted: 3
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976195 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 2990080 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3006464 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 2998272 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.728481293s of 25.739419937s, submitted: 3
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 2924544 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 2768896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 2686976 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 2686976 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 2686976 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 2686976 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea03ed21e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea05b23000 session 0x55ea03d9f860
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06849e00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976063 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 2678784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.944995880s of 32.024192810s, submitted: 378
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976195 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976327 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.414603233s of 12.424942970s, submitted: 2
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977839 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 2670592 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977707 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 2662400 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 2654208 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06acc5a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb0000 session 0x55ea06d60d20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea04d56b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977575 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 55.334781647s of 55.353782654s, submitted: 4
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977707 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977839 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.326207161s of 10.340394974s, submitted: 3
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978760 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 2646016 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980272 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980008 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06639000 session 0x55ea05bf6780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06506400 session 0x55ea06ab25a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980008 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980008 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.171087265s of 24.185573578s, submitted: 4
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980140 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 2637824 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984676 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984085 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.075248718s of 12.094985962s, submitted: 5
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983362 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983362 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea06505800 session 0x55ea06667e00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea04ac8800 session 0x55ea05c59680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983362 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983362 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.952816010s of 22.962663651s, submitted: 2
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983494 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983494 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982312 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.068272591s of 12.090098381s, submitted: 3
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 2629632 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981589 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea0315dc00 session 0x55ea04ce2f00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 ms_handle_reset con 0x55ea05b23000 session 0x55ea05bf30e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981589 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981589 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 2605056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.562101364s of 18.569944382s, submitted: 2
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x157448/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2596864 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981721 data_alloc: 218103808 data_used: 270336
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2596864 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2596864 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2596864 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 138 ms_handle_reset con 0x55ea03eb1400 session 0x55ea03d9f680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 138 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea04cedc20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb987000/0x0/0x4ffc00000, data 0xdcb720/0xe84000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 19218432 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 139 ms_handle_reset con 0x55ea0315bc00 session 0x55ea0684bc20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 19202048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080011 data_alloc: 218103808 data_used: 274432
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 19202048 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fb982000/0x0/0x4ffc00000, data 0xdcd8a1/0xe88000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 140 ms_handle_reset con 0x55ea0315dc00 session 0x55ea05bdab40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 18137088 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 18120704 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 18120704 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 18120704 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081781 data_alloc: 218103808 data_used: 274432
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 18120704 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fb980000/0x0/0x4ffc00000, data 0xdcf9ff/0xe8b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.686079979s of 12.876276970s, submitted: 45
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb980000/0x0/0x4ffc00000, data 0xdcf9ff/0xe8b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97d000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083699 data_alloc: 218103808 data_used: 274432
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 17072128 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083699 data_alloc: 218103808 data_used: 274432
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086723 data_alloc: 218103808 data_used: 274432
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 17063936 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.712406158s of 14.738780975s, submitted: 18
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086591 data_alloc: 218103808 data_used: 274432
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 ms_handle_reset con 0x55ea05b23000 session 0x55ea038c6d20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 ms_handle_reset con 0x55ea04ac8800 session 0x55ea03d9e960
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086591 data_alloc: 218103808 data_used: 274432
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086591 data_alloc: 218103808 data_used: 274432
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.687610626s of 15.697146416s, submitted: 2
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 17055744 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086723 data_alloc: 218103808 data_used: 274432
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 5545984 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb97e000/0x0/0x4ffc00000, data 0xdd1a27/0xe8e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06c8ad20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 5545984 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 5545984 heap: 102727680 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 5996544 heap: 104898560 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea066674a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea05b23000 session 0x55ea05bf3a40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea06667a40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb976000/0x0/0x4ffc00000, data 0xdd5d12/0xe95000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea06507c00 session 0x55ea066654a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea0315dc00 session 0x55ea04d0dc20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178396 data_alloc: 234881024 data_used: 11739136
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb2c7000/0x0/0x4ffc00000, data 0x1484d12/0x1544000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb2c7000/0x0/0x4ffc00000, data 0x1484d12/0x1544000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.715719223s of 12.101661682s, submitted: 36
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176038 data_alloc: 234881024 data_used: 11739136
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb2c8000/0x0/0x4ffc00000, data 0x1484d12/0x1544000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea04cecf00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb2c8000/0x0/0x4ffc00000, data 0x1484d12/0x1544000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 9469952 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb2c4000/0x0/0x4ffc00000, data 0x1486d3a/0x1547000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 9461760 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 6021120 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb2c4000/0x0/0x4ffc00000, data 0x1486d3a/0x1547000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105463808 unmapped: 3112960 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226388 data_alloc: 234881024 data_used: 18665472
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105463808 unmapped: 3112960 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb2c5000/0x0/0x4ffc00000, data 0x1486d3a/0x1547000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 3080192 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226388 data_alloc: 234881024 data_used: 18665472
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 3080192 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb2c5000/0x0/0x4ffc00000, data 0x1486d3a/0x1547000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 3047424 heap: 108576768 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb2c5000/0x0/0x4ffc00000, data 0x1486d3a/0x1547000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.687959671s of 14.876306534s, submitted: 14
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 5038080 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303308 data_alloc: 234881024 data_used: 19673088
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0000 session 0x55ea06ab2f00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06506400 session 0x55ea06ab2d20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa94c000/0x0/0x4ffc00000, data 0x1dffd3a/0x1ec0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 4972544 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 4759552 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308934 data_alloc: 234881024 data_used: 19836928
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 4759552 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 4726784 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 4726784 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308934 data_alloc: 234881024 data_used: 19836928
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 4726784 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.651059151s of 12.785633087s, submitted: 60
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309218 data_alloc: 234881024 data_used: 19841024
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309218 data_alloc: 234881024 data_used: 19841024
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06505800 session 0x55ea03e4fa40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea06c8a000
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 4743168 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.128098488s of 12.134003639s, submitted: 1
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 5537792 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 5537792 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302979 data_alloc: 234881024 data_used: 19841024
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea045ffc20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0000 session 0x55ea0679d4a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea06ab3680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 5537792 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea069e23c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 3964928 heap: 118472704 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06505800 session 0x55ea04cec000
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06c8a1e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0000 session 0x55ea045e5e00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea045fef00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea06acc780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06506400 session 0x55ea06846b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 11460608 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352704 data_alloc: 234881024 data_used: 20365312
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9180000/0x0/0x4ffc00000, data 0x242ad9c/0x24ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea038c6780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9180000/0x0/0x4ffc00000, data 0x242ad9c/0x24ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 11444224 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9180000/0x0/0x4ffc00000, data 0x242ad9c/0x24ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9180000/0x0/0x4ffc00000, data 0x242ad9c/0x24ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea06946960
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 11804672 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea03ed32c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.177427292s of 10.367736816s, submitted: 40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04657400 session 0x55ea03ea3680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 11788288 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354354 data_alloc: 234881024 data_used: 20365312
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 11788288 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f917f000/0x0/0x4ffc00000, data 0x242adac/0x24ed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 11788288 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10551296 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385514 data_alloc: 234881024 data_used: 24961024
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 7503872 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f917f000/0x0/0x4ffc00000, data 0x242adac/0x24ed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 7503872 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385514 data_alloc: 234881024 data_used: 24961024
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 7503872 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.596323013s of 12.603732109s, submitted: 2
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 7503872 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f917f000/0x0/0x4ffc00000, data 0x242adac/0x24ed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 7471104 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388042 data_alloc: 234881024 data_used: 25010176
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 7544832 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9023000/0x0/0x4ffc00000, data 0x2585dac/0x2648000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 8036352 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe6000/0x0/0x4ffc00000, data 0x25c2dac/0x2685000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 8003584 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405528 data_alloc: 234881024 data_used: 25051136
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 8003584 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 8019968 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.311245918s of 10.453396797s, submitted: 46
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 7766016 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fc8000/0x0/0x4ffc00000, data 0x25e1dac/0x26a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 7733248 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22800 session 0x55ea06accd20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06638800 session 0x55ea06acc3c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403360 data_alloc: 234881024 data_used: 25055232
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 10452992 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea04d0a1e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935f000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313783 data_alloc: 234881024 data_used: 20365312
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935f000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 10444800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935f000/0x0/0x4ffc00000, data 0x1e81d3a/0x1f42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.488372803s of 12.584567070s, submitted: 27
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea03d9fc20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea0679c000
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 10428416 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146657 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb1c00 session 0x55ea06acdc20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145841 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145841 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145841 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145841 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 16097280 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea066670e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea060be960
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea05c58000
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06638800 session 0x55ea05bf70e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.042076111s of 23.134502411s, submitted: 31
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04657400 session 0x55ea045e4780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea04d0c1e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea04d574a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea03ea5e00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06638800 session 0x55ea06ab3680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0000 session 0x55ea069e34a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6ce000/0x0/0x4ffc00000, data 0xedcd4a/0xf9e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161065 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6ce000/0x0/0x4ffc00000, data 0xedcd4a/0xf9e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 16654336 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315dc00 session 0x55ea06946000
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 16637952 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161065 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea06c8b680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea06acd860
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 16637952 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06638800 session 0x55ea06c163c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 16326656 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.804592133s of 10.089959145s, submitted: 14
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174152 data_alloc: 234881024 data_used: 13336576
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174152 data_alloc: 234881024 data_used: 13336576
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06503c00 session 0x55ea06d60d20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea04d56b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ac8800 session 0x55ea038c63c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea045be1e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 16302080 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa6a9000/0x0/0x4ffc00000, data 0xf00d5a/0xfc3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.164918900s of 12.164921761s, submitted: 0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 15761408 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225180 data_alloc: 234881024 data_used: 13484032
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 14311424 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233384 data_alloc: 234881024 data_used: 13656064
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0x15c0d5a/0x1683000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233096 data_alloc: 234881024 data_used: 13656064
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fe6000/0x0/0x4ffc00000, data 0x15c3d5a/0x1686000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233096 data_alloc: 234881024 data_used: 13656064
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fe6000/0x0/0x4ffc00000, data 0x15c3d5a/0x1686000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.687915802s of 19.852085114s, submitted: 57
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232984 data_alloc: 234881024 data_used: 13656064
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 13516800 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fe5000/0x0/0x4ffc00000, data 0x15c4d5a/0x1687000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 13508608 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 13508608 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea066641e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22800 session 0x55ea05c583c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 13524992 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea06664000
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155061 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155061 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155061 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155061 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 15441920 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155061 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 15433728 heap: 126869504 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea06acde00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ac8800 session 0x55ea06acc5a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea069e30e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ca1c00 session 0x55ea069e2780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.104520798s of 30.352832794s, submitted: 35
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110657536 unmapped: 18382848 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea04d57c20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea04d0cf00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ac8800 session 0x55ea04d0c3c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185720 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a400 session 0x55ea039512c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea03efa3c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa098000/0x0/0x4ffc00000, data 0x1101d73/0x11c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa098000/0x0/0x4ffc00000, data 0x1101dac/0x11c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185736 data_alloc: 234881024 data_used: 12271616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea03efad20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 18358272 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa098000/0x0/0x4ffc00000, data 0x1101dac/0x11c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 18235392 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200328 data_alloc: 234881024 data_used: 14434304
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa098000/0x0/0x4ffc00000, data 0x1101dac/0x11c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.1 total, 600.0 interval
    Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 10K writes, 2803 syncs, 3.86 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1606 writes, 4059 keys, 1606 commit groups, 1.0 writes per commit group, ingest: 3.17 MB, 0.01 MB/s
    Interval WAL: 1606 writes, 729 syncs, 2.20 writes per sync, written: 0.00 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
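Cross-checking the interval figures in the stats dump above (600 s interval, 1606 WAL writes, 729 syncs):

    # Interval rates from the RocksDB "DB Stats" dump above.
    interval_s, writes, syncs = 600.0, 1606, 729
    print(f"{writes / interval_s:.2f} writes/s, {writes / syncs:.2f} writes per sync")
    # -> 2.68 writes/s, 2.20 writes per sync (matches the reported 2.20)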
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200328 data_alloc: 234881024 data_used: 14434304
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa098000/0x0/0x4ffc00000, data 0x1101dac/0x11c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 18415616 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.779224396s of 18.846429825s, submitted: 29
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112189440 unmapped: 16850944 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 16359424 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213650 data_alloc: 234881024 data_used: 14831616
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221340 data_alloc: 234881024 data_used: 14651392
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221340 data_alloc: 234881024 data_used: 14651392
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221340 data_alloc: 234881024 data_used: 14651392
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9edf000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221340 data_alloc: 234881024 data_used: 14651392
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.352136612s of 22.470682144s, submitted: 49
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 17178624 heap: 129040384 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea045e50e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea045e4780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea05bf70e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05803c00 session 0x55ea06acc780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea06accd20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 29753344 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 29753344 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 29753344 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 29753344 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308705 data_alloc: 234881024 data_used: 14655488
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 29687808 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 29687808 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea0679d4a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 29687808 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 29671424 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 29671424 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308837 data_alloc: 234881024 data_used: 14655488
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 29671424 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 29671424 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21880832 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21880832 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21880832 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380429 data_alloc: 234881024 data_used: 24752128
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21880832 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21880832 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 21848064 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 21848064 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 21848064 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380429 data_alloc: 234881024 data_used: 24752128
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 21848064 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 21848064 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f935d000/0x0/0x4ffc00000, data 0x1e3cdac/0x1eff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.788692474s of 21.901130676s, submitted: 31
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 18677760 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 17989632 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 16621568 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1460577 data_alloc: 234881024 data_used: 25886720
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8969000/0x0/0x4ffc00000, data 0x282fdac/0x28f2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 18309120 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 18309120 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 18309120 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 18300928 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 18300928 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1459105 data_alloc: 234881024 data_used: 25972736
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8960000/0x0/0x4ffc00000, data 0x2839dac/0x28fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 18030592 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea0679de00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea04d56780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 18038784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0a800 session 0x55ea03e850e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 26386432 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 26386432 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 26386432 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228581 data_alloc: 234881024 data_used: 14196736
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9ee8000/0x0/0x4ffc00000, data 0x12b1dac/0x1374000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 26386432 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 26386432 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.157867432s of 14.509943008s, submitted: 144
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea03ea25a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea03e4f680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 27934720 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea04d563c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174001 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174001 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174001 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 27926528 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174001 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174001 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 28278784 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.028423309s of 26.106794357s, submitted: 24
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea066654a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b23000 session 0x55ea06c17a40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea048ffc20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea06d60b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea048fed20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 27230208 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f99f5000/0x0/0x4ffc00000, data 0x17a5d9c/0x1867000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 27230208 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259905 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 27230208 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 27230208 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f99f5000/0x0/0x4ffc00000, data 0x17a5d9c/0x1867000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06d61680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27598848 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27598848 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 25665536 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310329 data_alloc: 234881024 data_used: 19058688
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f99d1000/0x0/0x4ffc00000, data 0x17c9d9c/0x188b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321121 data_alloc: 234881024 data_used: 20672512
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f99d1000/0x0/0x4ffc00000, data 0x17c9d9c/0x188b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 24829952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321577 data_alloc: 234881024 data_used: 20684800
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.084711075s of 17.252138138s, submitted: 53
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124837888 unmapped: 17850368 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f908f000/0x0/0x4ffc00000, data 0x210bd9c/0x21cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 17489920 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 18743296 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 18669568 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9006000/0x0/0x4ffc00000, data 0x2193d9c/0x2255000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 18636800 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414765 data_alloc: 234881024 data_used: 22802432
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 18636800 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 18636800 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9006000/0x0/0x4ffc00000, data 0x2193d9c/0x2255000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 18604032 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9006000/0x0/0x4ffc00000, data 0x2193d9c/0x2255000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 18604032 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412829 data_alloc: 234881024 data_used: 22814720
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 18604032 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe6000/0x0/0x4ffc00000, data 0x21b4d9c/0x2276000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe6000/0x0/0x4ffc00000, data 0x21b4d9c/0x2276000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe6000/0x0/0x4ffc00000, data 0x21b4d9c/0x2276000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.732014656s of 13.976600647s, submitted: 118
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413077 data_alloc: 234881024 data_used: 22814720
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe0000/0x0/0x4ffc00000, data 0x21bad9c/0x227c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe0000/0x0/0x4ffc00000, data 0x21bad9c/0x227c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124092416 unmapped: 18595840 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe0000/0x0/0x4ffc00000, data 0x21bad9c/0x227c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124100608 unmapped: 18587648 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124100608 unmapped: 18587648 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414141 data_alloc: 234881024 data_used: 22843392
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124108800 unmapped: 18579456 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fe0000/0x0/0x4ffc00000, data 0x21bad9c/0x227c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 18432000 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 18432000 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 18432000 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 18432000 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.906300545s of 10.919371605s, submitted: 4
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417089 data_alloc: 234881024 data_used: 22843392
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b25800 session 0x55ea04ce3860
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea04ce3a40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea04ce34a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea04ce2000
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea04ce3e00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 18063360 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 18063360 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85dd000/0x0/0x4ffc00000, data 0x2bbdd9c/0x2c7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 18055168 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04a11000 session 0x55ea04ce32c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 18055168 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea045e4960
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea06665860
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 18055168 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea069e3c20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490465 data_alloc: 234881024 data_used: 22843392
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 18046976 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x2bbddac/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 124968960 unmapped: 17719296 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 8749056 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 8658944 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 8658944 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x2bbddac/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560385 data_alloc: 251658240 data_used: 33210368
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134062080 unmapped: 8626176 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134062080 unmapped: 8626176 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x2bbddac/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.262975693s of 12.345156670s, submitted: 14
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134209536 unmapped: 8478720 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134209536 unmapped: 8478720 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85d8000/0x0/0x4ffc00000, data 0x2bc1dac/0x2c84000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134242304 unmapped: 8445952 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f85d8000/0x0/0x4ffc00000, data 0x2bc1dac/0x2c84000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560113 data_alloc: 251658240 data_used: 33210368
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134275072 unmapped: 8413184 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 8396800 heap: 142688256 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 138690560 unmapped: 6103040 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 138829824 unmapped: 5963776 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 5111808 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e54000/0x0/0x4ffc00000, data 0x333ddac/0x3400000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1634445 data_alloc: 251658240 data_used: 33914880
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139124736 unmapped: 5668864 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cd000 session 0x55ea03efbe00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.642090797s of 11.800820351s, submitted: 440
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e5c000/0x0/0x4ffc00000, data 0x333ddac/0x3400000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635745 data_alloc: 251658240 data_used: 33914880
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 5660672 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e56000/0x0/0x4ffc00000, data 0x3343dac/0x3406000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635745 data_alloc: 251658240 data_used: 33914880
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139141120 unmapped: 5652480 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635345 data_alloc: 251658240 data_used: 33914880
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e51000/0x0/0x4ffc00000, data 0x3348dac/0x340b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.756099701s of 11.783482552s, submitted: 8
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e4c000/0x0/0x4ffc00000, data 0x334ddac/0x3410000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635385 data_alloc: 251658240 data_used: 33914880
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 5644288 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139173888 unmapped: 5619712 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e49000/0x0/0x4ffc00000, data 0x3350dac/0x3413000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139173888 unmapped: 5619712 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635677 data_alloc: 251658240 data_used: 33914880
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.557255745s of 10.573647499s, submitted: 4
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e49000/0x0/0x4ffc00000, data 0x3350dac/0x3413000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 5611520 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139190272 unmapped: 5603328 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636225 data_alloc: 251658240 data_used: 33914880
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139190272 unmapped: 5603328 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e44000/0x0/0x4ffc00000, data 0x3354dac/0x3417000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139190272 unmapped: 5603328 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139190272 unmapped: 5603328 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e44000/0x0/0x4ffc00000, data 0x3354dac/0x3417000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139214848 unmapped: 5578752 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139214848 unmapped: 5578752 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635993 data_alloc: 251658240 data_used: 33914880
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139214848 unmapped: 5578752 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x3359dac/0x341c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139214848 unmapped: 5578752 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 5570560 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 5570560 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.444334030s of 12.466792107s, submitted: 6
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 5570560 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e3d000/0x0/0x4ffc00000, data 0x335cdac/0x341f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636017 data_alloc: 251658240 data_used: 33914880
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 5570560 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139231232 unmapped: 5562368 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139231232 unmapped: 5562368 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139231232 unmapped: 5562368 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139231232 unmapped: 5562368 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636169 data_alloc: 251658240 data_used: 33914880
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139231232 unmapped: 5562368 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7e3a000/0x0/0x4ffc00000, data 0x335fdac/0x3422000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139239424 unmapped: 5554176 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06acd2c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea04d57a40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea03951680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 13017088 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fab000/0x0/0x4ffc00000, data 0x21efd9c/0x22b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 13017088 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 13017088 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.946741104s of 11.008224487s, submitted: 24
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428990 data_alloc: 234881024 data_used: 22908928
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fa6000/0x0/0x4ffc00000, data 0x21f4d9c/0x22b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fa6000/0x0/0x4ffc00000, data 0x21f4d9c/0x22b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428990 data_alloc: 234881024 data_used: 22908928
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fa6000/0x0/0x4ffc00000, data 0x21f4d9c/0x22b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8fa6000/0x0/0x4ffc00000, data 0x21f4d9c/0x22b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 12976128 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea06d610e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05c63800 session 0x55ea045e5e00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 12959744 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 23298048 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.918642998s of 10.027234077s, submitted: 43
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea069e30e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:55.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198332 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 23961600 heap: 144793600 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.011581421s of 28.023063660s, submitted: 4
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea03efa5a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea06ab2780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea069e2b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea06947680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05c63800 session 0x55ea03ea4f00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 26509312 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 26509312 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f30000/0x0/0x4ffc00000, data 0x126bd3a/0x132c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234174 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f30000/0x0/0x4ffc00000, data 0x126bd3a/0x132c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea03ea4b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 26509312 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea04d4fe00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea068465a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 26509312 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03eb0800 session 0x55ea069e3c20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f0b000/0x0/0x4ffc00000, data 0x128fd4a/0x1351000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f0b000/0x0/0x4ffc00000, data 0x128fd4a/0x1351000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249600 data_alloc: 234881024 data_used: 13393920
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9f0b000/0x0/0x4ffc00000, data 0x128fd4a/0x1351000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 26148864 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea069e32c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05c63800 session 0x55ea069463c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263432 data_alloc: 234881024 data_used: 15491072
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.155310631s of 12.628594398s, submitted: 11
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea039505a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ac9c00 session 0x55ea06acda40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04a10800 session 0x55ea06664d20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203077 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203077 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119816192 unmapped: 27090944 heap: 146907136 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.648719788s of 12.687404633s, submitted: 13
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea045e4780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6400 session 0x55ea06c170e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea06c16000
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06c174a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06c16b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f993b000/0x0/0x4ffc00000, data 0x1860d3a/0x1921000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 31768576 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 31768576 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285125 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 31760384 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f993b000/0x0/0x4ffc00000, data 0x1860d3a/0x1921000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 31760384 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea04ce3a40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 31760384 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285125 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f993b000/0x0/0x4ffc00000, data 0x1860d3a/0x1921000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 31858688 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 29106176 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f993b000/0x0/0x4ffc00000, data 0x1860d3a/0x1921000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea04ce34a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea04d565a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f993b000/0x0/0x4ffc00000, data 0x1860d3a/0x1921000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 29106176 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.941737175s of 12.008414268s, submitted: 14
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea03e84b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209885 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 32874496 heap: 152150016 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.909063339s of 32.147178650s, submitted: 15
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea03e850e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea03ea3680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea03ea34a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 44343296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea03ea25a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea03ea30e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 44343296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 44343296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283741 data_alloc: 234881024 data_used: 11812864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea03ea2d20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 44343296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98fa000/0x0/0x4ffc00000, data 0x18a1d3a/0x1962000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea05bf34a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea05bf2960
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 44335104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea069e25a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 44032000 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 44015616 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 120930304 unmapped: 42770432 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x18c5d4a/0x1987000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364823 data_alloc: 234881024 data_used: 22085632
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x18c5d4a/0x1987000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x18c5d4a/0x1987000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364823 data_alloc: 234881024 data_used: 22085632
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x18c5d4a/0x1987000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x18c5d4a/0x1987000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 42221568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.289739609s of 17.994153976s, submitted: 9
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399031 data_alloc: 234881024 data_used: 22175744
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125911040 unmapped: 37789696 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125911040 unmapped: 37789696 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9493000/0x0/0x4ffc00000, data 0x1d07d4a/0x1dc9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,2,0,0,4])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9493000/0x0/0x4ffc00000, data 0x1d07d4a/0x1dc9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125960192 unmapped: 37740544 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9487000/0x0/0x4ffc00000, data 0x1d13d4a/0x1dd5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408059 data_alloc: 234881024 data_used: 22564864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9487000/0x0/0x4ffc00000, data 0x1d13d4a/0x1dd5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9487000/0x0/0x4ffc00000, data 0x1d13d4a/0x1dd5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408059 data_alloc: 234881024 data_used: 22564864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea069e2000
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea04d0c5a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 125976576 unmapped: 37724160 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408059 data_alloc: 234881024 data_used: 22564864
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.056972504s of 15.507387161s, submitted: 28
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9487000/0x0/0x4ffc00000, data 0x1d13d4a/0x1dd5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa39f000/0x0/0x4ffc00000, data 0xdfbd4a/0xebd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea069e34a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219886 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219886 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219886 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219886 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219886 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 43991040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.235197067s of 26.320636749s, submitted: 18
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea03e845a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea03ea4f00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06c8be00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea03ed32c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea03e843c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9880000/0x0/0x4ffc00000, data 0x191bd3a/0x19dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306858 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea069e2780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 44834816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x191bd5d/0x19dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 42246144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380167 data_alloc: 234881024 data_used: 21360640
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x191bd5d/0x19dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380167 data_alloc: 234881024 data_used: 21360640
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x191bd5d/0x19dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x191bd5d/0x19dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 122642432 unmapped: 41058304 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f987f000/0x0/0x4ffc00000, data 0x191bd5d/0x19dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.510463715s of 18.653633118s, submitted: 18
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1462413 data_alloc: 234881024 data_used: 21397504
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 33005568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130891776 unmapped: 32808960 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129646592 unmapped: 34054144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea045be3c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef9400 session 0x55ea06acd4a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130170880 unmapped: 33529856 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea05bf3a40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea06847c20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea046ff800 session 0x55ea06946d20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 33513472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526265 data_alloc: 234881024 data_used: 21716992
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 33513472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 33480704 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 33480704 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 33480704 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b0b800 session 0x55ea03efab40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 33480704 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526281 data_alloc: 234881024 data_used: 21716992
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 33480704 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130359296 unmapped: 33341440 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 30244864 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 30244864 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 30121984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560633 data_alloc: 234881024 data_used: 26701824
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 30121984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 30121984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 30121984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 30121984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8764000/0x0/0x4ffc00000, data 0x2a36d5d/0x2af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 30089216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1560633 data_alloc: 234881024 data_used: 26701824
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 30089216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 30089216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.776948929s of 22.069581985s, submitted: 105
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 24207360 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7a4b000/0x0/0x4ffc00000, data 0x374fd5d/0x3811000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 139476992 unmapped: 24223744 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 23699456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667361 data_alloc: 251658240 data_used: 28028928
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 23699456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 23699456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f799e000/0x0/0x4ffc00000, data 0x37fbd5d/0x38bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 23699456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140001280 unmapped: 23699456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 23543808 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f797e000/0x0/0x4ffc00000, data 0x381cd5d/0x38de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664897 data_alloc: 251658240 data_used: 28028928
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 23535616 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 23535616 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f797e000/0x0/0x4ffc00000, data 0x381cd5d/0x38de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.803740501s of 10.597007751s, submitted: 132
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 23535616 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f797e000/0x0/0x4ffc00000, data 0x381cd5d/0x38de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 23535616 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f797c000/0x0/0x4ffc00000, data 0x381dd5d/0x38df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 23527424 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665233 data_alloc: 251658240 data_used: 28028928
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140345344 unmapped: 23355392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140345344 unmapped: 23355392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06ef6000 session 0x55ea05c58f00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06a38c00 session 0x55ea04d0c780
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f796e000/0x0/0x4ffc00000, data 0x382cd5d/0x38ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 23371776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea05bf3680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 28901376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 28901376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479151 data_alloc: 234881024 data_used: 21716992
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 28901376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x2404d5d/0x24c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea03d9ef00
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea06ab2b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 28901376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.386573792s of 10.000718117s, submitted: 44
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x2404d5d/0x24c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128729088 unmapped: 34971648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea04d0b680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246023 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246023 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246023 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246023 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa3c3000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246023 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea06d60b40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128737280 unmapped: 34963456 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea04ced4a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea0679cb40
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea03efa000
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.284482956s of 24.468833923s, submitted: 10
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06a38c00 session 0x55ea04d0c1e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea06a38c00 session 0x55ea05bf23c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea0679c3c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea03f3ec00 session 0x55ea048ff680
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea06d605a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9cfe000/0x0/0x4ffc00000, data 0x149bdac/0x155e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9cfe000/0x0/0x4ffc00000, data 0x149bdac/0x155e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305495 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 36683776 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea048fe5a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127328256 unmapped: 36372480 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14bfdcf/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346676 data_alloc: 234881024 data_used: 16285696
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349260 data_alloc: 234881024 data_used: 16650240
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14bfdcf/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.059680939s of 18.250682831s, submitted: 49
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387840 data_alloc: 234881024 data_used: 16707584
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 131407872 unmapped: 32292864 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f942d000/0x0/0x4ffc00000, data 0x1953dcf/0x1a17000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9425000/0x0/0x4ffc00000, data 0x1962dcf/0x1a26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395610 data_alloc: 234881024 data_used: 17113088
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 129753088 unmapped: 33947648 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9425000/0x0/0x4ffc00000, data 0x1962dcf/0x1a26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9425000/0x0/0x4ffc00000, data 0x1962dcf/0x1a26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395626 data_alloc: 234881024 data_used: 17113088
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130801664 unmapped: 32899072 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9425000/0x0/0x4ffc00000, data 0x1962dcf/0x1a26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 32890880 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.912492752s of 15.101532936s, submitted: 72
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b22000 session 0x55ea04d4e1e0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea0315bc00 session 0x55ea06ab2d20
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393482 data_alloc: 234881024 data_used: 17117184
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 130809856 unmapped: 32890880 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea044cc400 session 0x55ea03ed32c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 36831232 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 36823040 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 36814848 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 36798464 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 234881024 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 36790272 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 36741120 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'config diff' '{prefix=config diff}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'config show' '{prefix=config show}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'counter dump' '{prefix=counter dump}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'counter schema' '{prefix=counter schema}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 36519936 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 37044224 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'log dump' '{prefix=log dump}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 25960448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'perf dump' '{prefix=perf dump}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'perf schema' '{prefix=perf schema}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
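[editor's note] The do_command entries above are admin-socket requests, the same ones issued by e.g. `ceph daemon osd.1 perf dump`; "result is 0 bytes" only reflects how this debug line accounts for the streamed reply. A minimal sketch of talking to the socket directly, assuming the standard asok wire format (NUL-terminated JSON command in, 4-byte big-endian length plus JSON payload out) and a hypothetical socket path:

    import json
    import socket
    import struct

    def asok(path, **cmd):
        """Send one admin-socket command and return the raw JSON reply."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(path)
            s.sendall(json.dumps(cmd).encode() + b"\0")
            (length,) = struct.unpack(">I", s.recv(4))
            buf = b""
            while len(buf) < length:
                buf += s.recv(length - len(buf))
        return buf.decode()

    # Hypothetical path; mirrors the commands logged above.
    # print(asok("/var/run/ceph/ceph-osd.1.asok", prefix="perf dump"))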
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 36675584 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 36675584 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 36675584 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 36675584 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 36675584 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 36675584 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 36675584 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 36675584 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 36675584 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 36675584 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 36675584 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.1 total, 600.0 interval
    Cumulative writes: 13K writes, 49K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 13K writes, 3865 syncs, 3.48 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2619 writes, 9599 keys, 2619 commit groups, 1.0 writes per commit group, ingest: 9.93 MB, 0.02 MB/s
    Interval WAL: 2619 writes, 1062 syncs, 2.47 writes per sync, written: 0.01 GB, 0.02 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
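[editor's note] A quick consistency check on the WAL figures in the stats dump above: 2619 interval writes over 1062 syncs should reproduce the reported 2.47 writes per sync (and, with rounding, 13K writes over 3865 syncs the cumulative 3.48). A small sketch doing that arithmetic on the interval line:

    import re

    interval_wal = ("Interval WAL: 2619 writes, 1062 syncs, 2.47 writes "
                    "per sync, written: 0.01 GB, 0.02 MB/s")
    writes, syncs = map(
        int, re.search(r"(\d+) writes, (\d+) syncs", interval_wal).groups()
    )
    print(round(writes / syncs, 2))  # -> 2.47, matching the dump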
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 36667392 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 36659200 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 36651008 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 36651008 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 36651008 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 36651008 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 36651008 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 36651008 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 36651008 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 36651008 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 36651008 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 36651008 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 36642816 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127066112 unmapped: 36634624 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127066112 unmapped: 36634624 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127066112 unmapped: 36634624 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127066112 unmapped: 36634624 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127066112 unmapped: 36634624 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127066112 unmapped: 36634624 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127066112 unmapped: 36634624 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127066112 unmapped: 36634624 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127066112 unmapped: 36634624 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127066112 unmapped: 36634624 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127074304 unmapped: 36626432 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127074304 unmapped: 36626432 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127074304 unmapped: 36626432 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127074304 unmapped: 36626432 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127074304 unmapped: 36626432 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127082496 unmapped: 36618240 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127082496 unmapped: 36618240 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127082496 unmapped: 36618240 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127082496 unmapped: 36618240 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127082496 unmapped: 36618240 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127082496 unmapped: 36618240 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127082496 unmapped: 36618240 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127082496 unmapped: 36618240 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 36610048 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127098880 unmapped: 36601856 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127098880 unmapped: 36601856 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127107072 unmapped: 36593664 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127107072 unmapped: 36593664 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127107072 unmapped: 36593664 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127107072 unmapped: 36593664 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127107072 unmapped: 36593664 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127107072 unmapped: 36593664 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127107072 unmapped: 36593664 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127107072 unmapped: 36593664 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127115264 unmapped: 36585472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127123456 unmapped: 36577280 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 36569088 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258664 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9fa9000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 36560896 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 260.576873779s of 261.004852295s, submitted: 53
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258167 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 36519936 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 36462592 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 127287296 unmapped: 36413440 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128385024 unmapped: 35315712 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 35266560 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 35250176 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 35250176 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 35241984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 35241984 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 35233792 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 35233792 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 35233792 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 35233792 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 35233792 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 35233792 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 35233792 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 35233792 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 35225600 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 35225600 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 35225600 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 35225600 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 35225600 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 35225600 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 35217408 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 35217408 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 35217408 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 35217408 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 35209216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 35209216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 35209216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 35209216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 35209216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128491520 unmapped: 35209216 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 35201024 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 35201024 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 35201024 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 35201024 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 35201024 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 35201024 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 35201024 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 35201024 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 35192832 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128516096 unmapped: 35184640 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27572 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
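
Interleaved with the OSD chatter, the ceph-mgr audit channel records a client.admin "orch ls" (with export) being dispatched; a matching "orch ps" dispatch appears further down. The cmd=[...] payload is plain JSON, so entries like this are easy to mine. A hypothetical extraction snippet, using this audit line verbatim:

```python
# Hypothetical helper: pull the command out of a ceph-mgr audit entry
# like the one above. The cmd=[...] payload is ordinary JSON.
import json
import re

line = ("log_channel(audit) log [DBG] : from='client.27572 -' "
        "entity='client.admin' cmd=[{\"prefix\": \"orch ls\", "
        "\"export\": true, \"target\": [\"mon-mgr\", \"\"]}]: dispatch")
m = re.search(r"cmd=(\[.*\]):", line)
cmd = json.loads(m.group(1))[0]
print(cmd["prefix"], "->", cmd)   # orch ls -> {'prefix': 'orch ls', ...}
```
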
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128524288 unmapped: 35176448 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128532480 unmapped: 35168256 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128532480 unmapped: 35168256 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128532480 unmapped: 35168256 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128532480 unmapped: 35168256 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128532480 unmapped: 35168256 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128532480 unmapped: 35168256 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128532480 unmapped: 35168256 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128532480 unmapped: 35168256 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128532480 unmapped: 35168256 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128532480 unmapped: 35168256 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 35160064 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128548864 unmapped: 35151872 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 35135488 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 35135488 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 35135488 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 35135488 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 35135488 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 35135488 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 35135488 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 35135488 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 35135488 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 35135488 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 35135488 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 35127296 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27052 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 35119104 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 35110912 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128598016 unmapped: 35102720 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128606208 unmapped: 35094528 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128614400 unmapped: 35086336 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128622592 unmapped: 35078144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128622592 unmapped: 35078144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128622592 unmapped: 35078144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128622592 unmapped: 35078144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128622592 unmapped: 35078144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128622592 unmapped: 35078144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128622592 unmapped: 35078144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128622592 unmapped: 35078144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128622592 unmapped: 35078144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128622592 unmapped: 35078144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128622592 unmapped: 35078144 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea05b24000 session 0x55ea06c8b2c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128630784 unmapped: 35069952 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128638976 unmapped: 35061760 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128638976 unmapped: 35061760 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128638976 unmapped: 35061760 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128638976 unmapped: 35061760 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128638976 unmapped: 35061760 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128638976 unmapped: 35061760 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128638976 unmapped: 35061760 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128638976 unmapped: 35061760 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128638976 unmapped: 35061760 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128638976 unmapped: 35061760 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128638976 unmapped: 35061760 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 35053568 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 35045376 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 35037184 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 35028992 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 35020800 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: mgrc ms_handle_reset ms_handle_reset con 0x55ea044cc800
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1444264366
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1444264366,v1:192.168.122.100:6801/1444264366]
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: mgrc handle_mgr_configure stats_period=5
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea045dbc00 session 0x55ea04d563c0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 ms_handle_reset con 0x55ea04ac9c00 session 0x55ea045e45a0
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 35012608 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 35561472 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'config diff' '{prefix=config diff}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'config show' '{prefix=config show}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xdd7d3a/0xe98000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'counter dump' '{prefix=counter dump}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'counter schema' '{prefix=counter schema}'
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 35307520 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 35143680 heap: 163700736 old mem: 2845415832 new mem: 2845415832
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258080 data_alloc: 218103808 data_used: 10764288
Dec  1 05:37:55 np0005540825 ceph-osd[82809]: do_command 'log dump' '{prefix=log dump}'
Dec  1 05:37:55 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:55 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:55 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:55.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:55 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17712 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:55 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  1 05:37:55 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694297750' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  1 05:37:55 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27079 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:55 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27599 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  1 05:37:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3319515747' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  1 05:37:56 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17727 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:56 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27088 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:56 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27611 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:37:56 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17742 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:56 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  1 05:37:56 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/30622397' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  1 05:37:56 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27103 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:56 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1427: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:37:56 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27623 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:56 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17754 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:37:57 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec  1 05:37:57 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27115 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:57 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123307550' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  1 05:37:57 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27638 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:57 np0005540825 podman[299907]: 2025-12-01 10:37:57.211683722 +0000 UTC m=+0.070453521 container health_status 7bc939aafd4a9acefa197f1f92457d0f5b521b8583bef6fbc66b0de7852b0239 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 05:37:57 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17760 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:57.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:57 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27127 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:57 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:57.437Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:57 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27656 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:57 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:57 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:37:57 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:57.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:37:57 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17769 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:57 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27151 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:57 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27674 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec  1 05:37:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3662145449' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  1 05:37:58 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17790 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:58 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27689 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec  1 05:37:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1113212562' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  1 05:37:58 np0005540825 nova_compute[256151]: 2025-12-01 10:37:58.471 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:58 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17802 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:37:58 np0005540825 nova_compute[256151]: 2025-12-01 10:37:58.567 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:37:58 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1428: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:37:58 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec  1 05:37:58 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/280147729' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  1 05:37:58 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:37:58.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:37:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:58 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:37:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:37:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:37:59 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:37:59 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:37:59 np0005540825 nova_compute[256151]: 2025-12-01 10:37:59.027 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:37:59 np0005540825 nova_compute[256151]: 2025-12-01 10:37:59.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 05:37:59 np0005540825 nova_compute[256151]: 2025-12-01 10:37:59.027 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 05:37:59 np0005540825 nova_compute[256151]: 2025-12-01 10:37:59.043 256155 DEBUG nova.compute.manager [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 05:37:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec  1 05:37:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109756647' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  1 05:37:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec  1 05:37:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272453431' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  1 05:37:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:37:59.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec  1 05:37:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547643522' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  1 05:37:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec  1 05:37:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766030144' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  1 05:37:59 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:37:59 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:37:59 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:37:59.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:37:59 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec  1 05:37:59 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097426543' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  1 05:38:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec  1 05:38:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/92804374' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec  1 05:38:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec  1 05:38:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427246299' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec  1 05:38:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec  1 05:38:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684233470' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  1 05:38:00 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1429: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:38:00 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27262 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:38:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  1 05:38:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3119880444' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  1 05:38:00 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec  1 05:38:00 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/143276294' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec  1 05:38:01 np0005540825 systemd[1]: Starting Hostname Service...
Dec  1 05:38:01 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27277 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:38:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec  1 05:38:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3099532193' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec  1 05:38:01 np0005540825 systemd[1]: Started Hostname Service.
Dec  1 05:38:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec  1 05:38:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471595939' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec  1 05:38:01 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-mgr-compute-0-fospow[74705]: ::ffff:192.168.122.100 - - [01/Dec/2025:10:38:01] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:38:01 np0005540825 ceph-mgr[74709]: [prometheus INFO cherrypy.access.139999370520560] ::ffff:192.168.122.100 - - [01/Dec/2025:10:38:01] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Dec  1 05:38:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:38:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000027s ======
Dec  1 05:38:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:38:01.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  1 05:38:01 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27794 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:38:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:38:01 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:38:01 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:38:01 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:38:01.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:38:01 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27806 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:01 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17910 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:38:01 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec  1 05:38:01 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036427775' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec  1 05:38:01 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27812 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:38:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27298 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27818 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17925 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17922 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:38:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27313 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27836 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:02 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1430: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:38:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17931 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27325 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:02 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27848 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:03 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17943 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec  1 05:38:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/360571178' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec  1 05:38:03 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27863 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:03 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27337 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:38:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:38:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:38:03.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:38:03 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17955 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:03 np0005540825 nova_compute[256151]: 2025-12-01 10:38:03.495 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:38:03 np0005540825 nova_compute[256151]: 2025-12-01 10:38:03.569 256155 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 05:38:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  1 05:38:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  1 05:38:03 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27349 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:03 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27878 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:03 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:38:03 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  1 05:38:03 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:38:03.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  1 05:38:03 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec  1 05:38:03 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/995261930' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec  1 05:38:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:38:03.810Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec  1 05:38:03 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-alertmanager-compute-0[105351]: ts=2025-12-01T10:38:03.811Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec  1 05:38:03 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17976 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:38:03 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  1 05:38:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:38:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  1 05:38:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:38:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  1 05:38:04 np0005540825 ceph-365f19c2-81e5-5edd-b6b4-280555214d3a-nfs-cephfs-2-0-compute-0-pytvsu[265949]: 01/12/2025 10:38:04 : epoch 692d6b3d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  1 05:38:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27890 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:04 np0005540825 nova_compute[256151]: 2025-12-01 10:38:04.026 256155 DEBUG oslo_service.periodic_task [None req-52f4bf38-1936-4883-9f4d-fb4630320458 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 05:38:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27379 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  1 05:38:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  1 05:38:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.17988 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27917 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:04 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec  1 05:38:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2492578862' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec  1 05:38:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:38:04.597 163291 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 05:38:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:38:04.598 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 05:38:04 np0005540825 ovn_metadata_agent[163286]: 2025-12-01 10:38:04.598 163291 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 05:38:04 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1431: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  1 05:38:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.18018 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  1 05:38:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  1 05:38:04 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  1 05:38:04 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27409 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:38:05 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27947 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:38:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:38:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:38:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.100 - anonymous [01/Dec/2025:10:38:05.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:38:05 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Dec  1 05:38:05 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501445425' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec  1 05:38:05 np0005540825 radosgw[94538]: ====== starting new request req=0x7fdfdc9835d0 =====
Dec  1 05:38:05 np0005540825 radosgw[94538]: ====== req done req=0x7fdfdc9835d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  1 05:38:05 np0005540825 radosgw[94538]: beast: 0x7fdfdc9835d0: 192.168.122.102 - anonymous [01/Dec/2025:10:38:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  1 05:38:05 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.18057 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  1 05:38:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec  1 05:38:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059862068' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec  1 05:38:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  1 05:38:06 np0005540825 ceph-mgr[74709]: log_channel(cluster) log [DBG] : pgmap v1432: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec  1 05:38:06 np0005540825 ceph-mon[74416]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec  1 05:38:06 np0005540825 ceph-mon[74416]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1497804764' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec  1 05:38:06 np0005540825 ceph-mgr[74709]: log_channel(audit) log [DBG] : from='client.27454 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
